feat: unify numeric operand promotion #139
base: main
Conversation
OSL ChatGPT Reviewer
NOTE: This is generated by an AI program, so some comments may not make sense.
src/irx/builders/llvmliteir.py: LGTM!
tests/test_llvmlite_helpers.py: LGTM!
@xmnlab @yuvimittal please have a look
OSL ChatGPT Reviewer
NOTE: This is generated by an AI program, so some comments may not make sense.
ChatGPT was not able to review src/irx/builders/llvmliteir.py or tests/test_llvmlite_helpers.py. Error code 429 (insufficient_quota): "You exceeded your current quota, please check your plan and billing details." For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors
Pull request overview
This PR centralizes numeric operand promotion for binary operations by introducing _unify_numeric_operands and related helper methods, replacing scattered scalar-vector promotion logic with a unified approach that handles ints, floats, and vectors consistently before LLVM emission (a standalone sketch of the resulting promotion rules follows the change list below).
Changes:
- Added _unify_numeric_operands and supporting helpers (_select_float_type, _float_type_from_width, _float_bit_width, _cast_value_to_type, _is_numeric_value) to standardize numeric type promotion
- Replaced 60+ lines of duplicated scalar-vector promotion logic with calls to the new unified method
- Added comprehensive unit tests covering scalar-to-vector, float-to-double, and int-to-float promotion scenarios
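For illustration, here is a minimal standalone sketch of the promotion rules described above; the helper name unified_scalar_type is hypothetical, and the PR's actual logic lives in LLVMLiteIRVisitor._unify_numeric_operands.

from llvmlite import ir

# Hypothetical sketch, not the PR's code: pick the common scalar type for
# two numeric operand types, mirroring the rules the PR centralizes.
def unified_scalar_type(lhs_ty, rhs_ty):
    fp_rank = {ir.HalfType: 16, ir.FloatType: 32, ir.DoubleType: 64}
    floats = [t for t in (lhs_ty, rhs_ty) if type(t) in fp_rank]
    if floats:
        # Any float operand pulls both operands to the widest float seen.
        return max(floats, key=lambda t: fp_rank[type(t)])
    # Pure-integer case: widen to the larger bit width (at least 1 bit).
    return ir.IntType(max(lhs_ty.width, rhs_ty.width, 1))

assert isinstance(unified_scalar_type(ir.IntType(32), ir.FloatType()), ir.FloatType)
assert unified_scalar_type(ir.IntType(16), ir.IntType(64)).width == 64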
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| src/irx/builders/llvmliteir.py | Adds centralized numeric promotion infrastructure with helper methods and integrates it into the BinaryOp visitor, removing the old scattered promotion logic |
| tests/test_llvmlite_helpers.py | Adds three unit tests validating scalar-to-vector promotion, float type widening, and mixed int-float promotion |
Comments suppressed due to low confidence (2)
src/irx/builders/llvmliteir.py:711
- After calling _unify_numeric_operands, vector operands are guaranteed to have matching counts and element types. However, the subsequent checks on lines 703-711 duplicate these validations. Since _unify_numeric_operands already ensures vector size and element type consistency (lines 475-478 check vector size mismatch, and the promotion logic ensures matching element types), these redundant checks could be removed or moved into a separate validation function to improve code clarity and reduce duplication.
if llvm_lhs.type.count != llvm_rhs.type.count:
raise Exception(
f"Vector size mismatch: {llvm_lhs.type} vs {llvm_rhs.type}"
)
if llvm_lhs.type.element != llvm_rhs.type.element:
raise Exception(
f"Vector element type mismatch: "
f"{llvm_lhs.type.element} vs {llvm_rhs.type.element}"
)
src/irx/builders/llvmliteir.py:792
- Scalar numeric operands are being promoted twice: first by _unify_numeric_operands (lines 698-700), and then again by promote_operands (line 792). This redundant promotion is inefficient. Consider either: (1) skipping _unify_numeric_operands for scalar-only operations, or (2) removing the promote_operands call for numeric types since they've already been unified. The behavior should be correct since both methods use compatible promotion strategies, but the double work is unnecessary. (A sketch of option (2) follows the quoted code below.)
if self._is_numeric_value(llvm_lhs) and self._is_numeric_value(
llvm_rhs
):
llvm_lhs, llvm_rhs = self._unify_numeric_operands(
llvm_lhs, llvm_rhs
)
# If both operands are LLVM vectors, handle as vector ops
if is_vector(llvm_lhs) and is_vector(llvm_rhs):
if llvm_lhs.type.count != llvm_rhs.type.count:
raise Exception(
f"Vector size mismatch: {llvm_lhs.type} vs {llvm_rhs.type}"
)
if llvm_lhs.type.element != llvm_rhs.type.element:
raise Exception(
f"Vector element type mismatch: "
f"{llvm_lhs.type.element} vs {llvm_rhs.type.element}"
)
is_float_vec = is_fp_type(llvm_lhs.type.element)
op = node.op_code
set_fast = is_float_vec and getattr(node, "fast_math", False)
if op == "*" and is_float_vec and getattr(node, "fma", False):
if not hasattr(node, "fma_rhs"):
raise Exception("FMA requires a third operand (fma_rhs)")
self.visit(node.fma_rhs)
llvm_fma_rhs = safe_pop(self.result_stack)
if llvm_fma_rhs.type != llvm_lhs.type:
raise Exception(
f"FMA operand type mismatch: "
f"{llvm_lhs.type} vs {llvm_fma_rhs.type}"
)
if set_fast:
self.set_fast_math(True)
try:
result = self._emit_fma(llvm_lhs, llvm_rhs, llvm_fma_rhs)
finally:
if set_fast:
self.set_fast_math(False)
self.result_stack.append(result)
return
if set_fast:
self.set_fast_math(True)
try:
if op == "+":
if is_float_vec:
result = self._llvm.ir_builder.fadd(
llvm_lhs, llvm_rhs, name="vfaddtmp"
)
self._apply_fast_math(result)
else:
result = self._llvm.ir_builder.add(
llvm_lhs, llvm_rhs, name="vaddtmp"
)
elif op == "-":
if is_float_vec:
result = self._llvm.ir_builder.fsub(
llvm_lhs, llvm_rhs, name="vfsubtmp"
)
self._apply_fast_math(result)
else:
result = self._llvm.ir_builder.sub(
llvm_lhs, llvm_rhs, name="vsubtmp"
)
elif op == "*":
if is_float_vec:
result = self._llvm.ir_builder.fmul(
llvm_lhs, llvm_rhs, name="vfmultmp"
)
self._apply_fast_math(result)
else:
result = self._llvm.ir_builder.mul(
llvm_lhs, llvm_rhs, name="vmultmp"
)
elif op == "/":
if is_float_vec:
result = self._llvm.ir_builder.fdiv(
llvm_lhs, llvm_rhs, name="vfdivtmp"
)
self._apply_fast_math(result)
else:
unsigned = getattr(node, "unsigned", None)
if unsigned is None:
raise Exception(
"Cannot infer integer division signedness "
"for vector op"
)
result = emit_int_div(
self._llvm.ir_builder, llvm_lhs, llvm_rhs, unsigned
)
else:
raise Exception(f"Vector binop {op} not implemented.")
finally:
if set_fast:
self.set_fast_math(False)
self.result_stack.append(result)
return
# Scalar Fallback: Original scalar promotion logic
llvm_lhs, llvm_rhs = self.promote_operands(llvm_lhs, llvm_rhs)
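A sketch of the comment's option (2), assuming the visitor tracks whether unification already ran; the already_unified flag is hypothetical and not part of the PR, and the vector-handling middle of the method is elided.

# Hypothetical restructuring, not the PR's code: remember whether
# _unify_numeric_operands ran so the scalar fallback can skip the
# redundant promote_operands call.
already_unified = False
if self._is_numeric_value(llvm_lhs) and self._is_numeric_value(llvm_rhs):
    llvm_lhs, llvm_rhs = self._unify_numeric_operands(llvm_lhs, llvm_rhs)
    already_unified = True

# ... vector handling unchanged ...

# Scalar fallback: promote only when unification did not already run.
if not already_unified:
    llvm_lhs, llvm_rhs = self.promote_operands(llvm_lhs, llvm_rhs)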
def _unify_numeric_operands(
    self, lhs: ir.Value, rhs: ir.Value
) -> tuple[ir.Value, ir.Value]:
    """Ensure numeric operands share shape and scalar type."""
Copilot AI (Jan 19, 2026)
The _unify_numeric_operands method would benefit from more detailed documentation. The current docstring "Ensure numeric operands share shape and scalar type" is minimal. Consider documenting: (1) the promotion rules (e.g., int promotes to float, narrower types promote to wider), (2) parameter types and constraints, (3) return value guarantees, (4) what exceptions can be raised, and (5) examples of transformations. This is a critical function for type safety and clear documentation would help maintainers understand the promotion semantics.
| """Ensure numeric operands share shape and scalar type.""" | |
| """ | |
| Normalize two numeric LLVM values to a common scalar type and shape. | |
| This helper is used before emitting arithmetic or comparison | |
| instructions so that both operands are type-compatible. It supports | |
| scalar and vector integers / floating-point values and performs both | |
| scalar type promotion and optional scalar-to-vector splatting. | |
| Promotion rules | |
| --------------- | |
| * Shape: | |
| - If both operands are vectors, they must have the same number of | |
| lanes; otherwise an Exception is raised. | |
| - If exactly one operand is a vector, its lane count is used as the | |
| target shape and the scalar operand is splatted to a vector of the | |
| same lane count after type promotion. | |
| - If both operands are scalars, the result operands remain scalars. | |
| * Scalar type: | |
| - If either operand has a floating-point scalar type, both operands | |
| are promoted to a common floating-point type selected via | |
| ``self._select_float_type`` from the floating-point candidates. | |
| - If both operands have integer scalar types, both are promoted to an | |
| integer type with ``width = max(lhs.width, rhs.width)`` (at least | |
| 1 bit), preserving signedness semantics as implemented by | |
| ``_cast_value_to_type``. | |
| Parameters | |
| ---------- | |
| lhs : llvmlite.ir.Value | |
| Left-hand numeric operand. May be a scalar or vector of integer or | |
| floating-point type. | |
| rhs : llvmlite.ir.Value | |
| Right-hand numeric operand. May be a scalar or vector of integer or | |
| floating-point type. | |
| Returns | |
| ------- | |
| (llvmlite.ir.Value, llvmlite.ir.Value) | |
| A pair ``(lhs', rhs')`` where: | |
| * ``lhs'.type`` and ``rhs'.type`` have the same scalar element type. | |
| * If either operand is a vector, both results are vectors with the | |
| same lane count. | |
| Raises | |
| ------ | |
| Exception | |
| If both operands are vectors and their lane counts (``.count``) | |
| differ. | |
| Any exception raised by ``_cast_value_to_type`` may also propagate if | |
| the operands cannot be safely cast to the selected target type. | |
| Examples | |
| -------- | |
| * ``i32 + i64`` -> both operands promoted to ``i64``. | |
| * ``float + i32`` -> both operands promoted to ``float``. | |
| * ``<4 x i16> + i32`` -> scalar ``i32`` cast to ``i32`` then splatted | |
| to ``<4 x i32>`` to match the vector operand. | |
| """ |
float_candidates = [
    ty for ty in (lhs_base_ty, rhs_base_ty) if is_fp_type(ty)
]
target_scalar_ty = self._select_float_type(float_candidates)
Copilot AI (Jan 19, 2026)
When mixing integer and floating-point operands, the integer width is not considered when selecting the target float type. For example, an int64 combined with a float32 will promote both to float32, which can cause precision loss since float32 cannot accurately represent all int64 values. Consider promoting to at least double (float64) when the integer operand has width > 32 bits, or document this behavior if the precision loss is acceptable for your use case.
Suggested change:

target_scalar_ty = self._select_float_type(float_candidates)
# If we are mixing an integer with a floating-point value, ensure that
# wide integers (> 32 bits) are promoted to at least double precision
# to avoid excessive precision loss when the selected float type is
# narrower than 64 bits.
if lhs_is_float != rhs_is_float:
    int_base_ty = lhs_base_ty if not lhs_is_float else rhs_base_ty
    int_width = getattr(int_base_ty, "width", 0)
    # Determine the bit-width of the selected floating-point type.
    float_bits = 0
    if isinstance(target_scalar_ty, HalfType):
        float_bits = FLOAT16_BITS
    elif isinstance(target_scalar_ty, FloatType):
        float_bits = FLOAT32_BITS
    elif isinstance(target_scalar_ty, DoubleType):
        float_bits = FLOAT64_BITS
    elif FP128Type is not None and isinstance(target_scalar_ty, FP128Type):
        float_bits = FLOAT128_BITS
    # Upgrade to double precision when combining a wide integer with
    # a float type that is narrower than 64 bits.
    if int_width > 32 and float_bits and float_bits < FLOAT64_BITS:
        target_scalar_ty = DoubleType()
if lhs_is_vec and rhs_is_vec and lhs.type.count != rhs.type.count:
    raise Exception(
        f"Vector size mismatch: {lhs.type.count} vs {rhs.type.count}"
Copilot AI (Jan 19, 2026)
The error message could be more informative by including the operation context. Instead of just "Vector size mismatch: X vs Y", consider including information about what operation was being attempted (e.g., "Binary operation '+' requires matching vector sizes, but got X vs Y"). This would help developers debug issues more quickly.
| f"Vector size mismatch: {lhs.type.count} vs {rhs.type.count}" | |
| "Numeric operation requires matching vector sizes, " | |
| f"but got {lhs.type} (size {lhs.type.count}) vs " | |
| f"{rhs.type} (size {rhs.type.count})" |
def _unify_numeric_operands(
    self, lhs: ir.Value, rhs: ir.Value
) -> tuple[ir.Value, ir.Value]:
    """Ensure numeric operands share shape and scalar type."""
    lhs_is_vec = is_vector(lhs)
    rhs_is_vec = is_vector(rhs)

    if lhs_is_vec and rhs_is_vec and lhs.type.count != rhs.type.count:
        raise Exception(
            f"Vector size mismatch: {lhs.type.count} vs {rhs.type.count}"
        )

    target_lanes = None
    if lhs_is_vec:
        target_lanes = lhs.type.count
    elif rhs_is_vec:
        target_lanes = rhs.type.count

    lhs_base_ty = lhs.type.element if lhs_is_vec else lhs.type
    rhs_base_ty = rhs.type.element if rhs_is_vec else rhs.type

    lhs_is_float = is_fp_type(lhs_base_ty)
    rhs_is_float = is_fp_type(rhs_base_ty)

    if lhs_is_float or rhs_is_float:
        float_candidates = [
            ty for ty in (lhs_base_ty, rhs_base_ty) if is_fp_type(ty)
        ]
        target_scalar_ty = self._select_float_type(float_candidates)
    else:
        lhs_width = getattr(lhs_base_ty, "width", 0)
        rhs_width = getattr(rhs_base_ty, "width", 0)
        target_scalar_ty = ir.IntType(max(lhs_width, rhs_width, 1))

    lhs = self._cast_value_to_type(lhs, target_scalar_ty)
    rhs = self._cast_value_to_type(rhs, target_scalar_ty)

    if target_lanes:
        vec_ty = ir.VectorType(target_scalar_ty, target_lanes)
        if not is_vector(lhs):
            lhs = splat_scalar(self._llvm.ir_builder, lhs, vec_ty)
        if not is_vector(rhs):
            rhs = splat_scalar(self._llvm.ir_builder, rhs, vec_ty)

    return lhs, rhs
Copilot AI (Jan 19, 2026)
The new unification logic changes the type promotion behavior compared to the removed code. Previously, when combining a float vector with a double scalar, the scalar would be truncated (fptrunc) to match the vector's element type. Now, both are promoted to the wider type (double). This is generally better for precision, but represents a behavior change that could affect existing code relying on the old behavior. Ensure this is intentional and documented, especially since it could impact numerical precision in existing computations.
)

assert is_fp_type(widened_int.type)
assert widened_float.type == visitor._llvm.FLOAT_TYPE
Copilot AI (Jan 19, 2026)
Missing blank line before function definition. According to PEP 8, there should be two blank lines before top-level function definitions to maintain consistency with the rest of the file.
Suggested change (insert two blank lines after this assert, before the next top-level function):

assert widened_float.type == visitor._llvm.FLOAT_TYPE
def test_unify_promotes_scalar_int_to_vector() -> None:
    """Scalar ints splat to match vector operands and widen width."""
    visitor = LLVMLiteIRVisitor()
    _prime_builder(visitor)

    vec_ty = ir.VectorType(ir.IntType(32), 2)
    vec = ir.Constant(vec_ty, [ir.Constant(ir.IntType(32), 1)] * 2)
    scalar = ir.Constant(ir.IntType(16), 5)

    promoted_vec, promoted_scalar = visitor._unify_numeric_operands(
        vec, scalar
    )

    assert isinstance(promoted_vec.type, ir.VectorType)
    assert isinstance(promoted_scalar.type, ir.VectorType)
    assert promoted_vec.type == vec_ty
    assert promoted_scalar.type == vec_ty


def test_unify_vector_float_rank_matches_double() -> None:
    """Float vectors upgrade to match double scalars."""
    visitor = LLVMLiteIRVisitor()
    _prime_builder(visitor)

    float_vec_ty = ir.VectorType(visitor._llvm.FLOAT_TYPE, 2)
    float_vec = ir.Constant(
        float_vec_ty,
        [
            ir.Constant(visitor._llvm.FLOAT_TYPE, 1.0),
            ir.Constant(visitor._llvm.FLOAT_TYPE, 2.0),
        ],
    )
    double_scalar = ir.Constant(visitor._llvm.DOUBLE_TYPE, 4.0)

    widened_vec, widened_scalar = visitor._unify_numeric_operands(
        float_vec, double_scalar
    )

    assert widened_vec.type.element == visitor._llvm.DOUBLE_TYPE
    assert widened_scalar.type.element == visitor._llvm.DOUBLE_TYPE


def test_unify_int_and_float_scalars_returns_float() -> None:
    """Scalar int + float promotes to float for both operands."""
    visitor = LLVMLiteIRVisitor()
    _prime_builder(visitor)

    int_scalar = ir.Constant(visitor._llvm.INT32_TYPE, 7)
    float_scalar = ir.Constant(visitor._llvm.FLOAT_TYPE, 1.25)

    widened_int, widened_float = visitor._unify_numeric_operands(
        int_scalar, float_scalar
    )

    assert is_fp_type(widened_int.type)
    assert widened_float.type == visitor._llvm.FLOAT_TYPE
Copilot AI (Jan 19, 2026)
Test coverage is missing several important edge cases for _unify_numeric_operands: (1) two vectors with mismatched element types (e.g., int32 vector vs float vector), (2) truncation scenarios where a wider type needs to be narrowed to match another operand, (3) FP128 type handling if available, (4) the error case where vectors have different sizes, and (5) scalar-to-scalar integer promotion with different widths. Consider adding tests for these scenarios to ensure the unification logic handles all cases correctly. (Sketches for cases (4) and (5) follow below.)
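Sketches for two of those cases, assuming the same LLVMLiteIRVisitor and _prime_builder helpers used in the PR's tests; these are illustrative drafts rather than code from the PR.

import pytest

def test_unify_rejects_mismatched_vector_sizes() -> None:
    """Case (4): vectors with different lane counts must raise."""
    visitor = LLVMLiteIRVisitor()
    _prime_builder(visitor)

    i32 = ir.IntType(32)
    vec2 = ir.Constant(ir.VectorType(i32, 2), [ir.Constant(i32, 1)] * 2)
    vec4 = ir.Constant(ir.VectorType(i32, 4), [ir.Constant(i32, 1)] * 4)

    with pytest.raises(Exception, match="Vector size mismatch"):
        visitor._unify_numeric_operands(vec2, vec4)


def test_unify_widens_scalar_ints() -> None:
    """Case (5): scalar ints of different widths widen to the larger width."""
    visitor = LLVMLiteIRVisitor()
    _prime_builder(visitor)

    lhs, rhs = visitor._unify_numeric_operands(
        ir.Constant(ir.IntType(16), 3), ir.Constant(ir.IntType(64), 9)
    )

    assert lhs.type == ir.IntType(64)
    assert rhs.type == ir.IntType(64)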
    lanes = value.type.count
    current_scalar_ty = value.type.element
    target_ty = ir.VectorType(target_scalar_ty, lanes)
else:
    lanes = None
Copilot AI (Jan 19, 2026)
Variable `lanes` is not used.
Suggested change:

    current_scalar_ty = value.type.element
    target_ty = ir.VectorType(target_scalar_ty, value.type.count)
else:
Pull Request description
Centralize numeric operand promotion for binops.
Adds _unify_numeric_operands plus helper tests so ints/floats/vectors all go through one path before LLVM emission, preventing mismatched widths or scalar/vector handling bugs. Solves #135
How to test these changes
python -m pytest tests/test_llvmlite_helpers.py -v
pre-commit run --files src/irx/builders/llvmliteir.py tests/test_llvmlite_helpers.py
Pull Request checklists
This PR is a:
About this PR:
Author's checklist:
Additional information
N/A
Reviewer's checklist