
Conversation

@omsherikar
Contributor

Pull Request description

Centralize numeric operand promotion for binops.
Adds _unify_numeric_operands plus helper tests so ints/floats/vectors all go through one path before LLVM emission, preventing mismatched widths or scalar/vector handling bugs.
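
For a quick feel of the unified path, here is a minimal sketch that mirrors the new unit tests (_prime_builder is the test helper that sets up an IR builder):

    visitor = LLVMLiteIRVisitor()
    _prime_builder(visitor)  # test helper that prepares an ir_builder

    int_scalar = ir.Constant(visitor._llvm.INT32_TYPE, 7)
    float_scalar = ir.Constant(visitor._llvm.FLOAT_TYPE, 1.25)

    # both operands come back sharing one scalar type before LLVM emission
    lhs, rhs = visitor._unify_numeric_operands(int_scalar, float_scalar)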

Solves #135

How to test these changes

  • python -m pytest tests/test_llvmlite_helpers.py -v
  • pre-commit run --files src/irx/builders/llvmliteir.py tests/test_llvmlite_helpers.py

Pull Request checklists

This PR is a:

  • bug-fix
  • new feature
  • maintenance

About this PR:

  • it includes tests.
  • the tests are executed on CI.
  • the tests generate log file(s) (path).
  • pre-commit hooks were executed locally.
  • this PR requires a project documentation update.

Author's checklist:

  • I have reviewed the changes and they contain no misspellings.
  • The code is well commented, especially in the parts that contain more complexity.
  • New and old tests passed locally.

Additional information

N/A

Reviewer's checklist

@github-actions

OSL ChatGPT Reviewer

NOTE: This is generated by an AI program, so some comments may not make sense.

src/irx/builders/llvmliteir.py

  • Correctness: signedness is ignored in casts.
    • _cast_value_to_type() always uses sitofp and sext, which is wrong for unsigned integers (will corrupt values). You need sign-awareness from the AST/type system. Suggest adding a signed: bool flag (default True) and using uitofp/zext when unsigned. (L.487)
      def _cast_value_to_type(self, value: ir.Value, target_scalar_ty: ir.Type, signed: bool = True) -> ir.Value:
          """Sign-aware cast of scalars or vectors to the target scalar type."""
          # use builder.uitofp(...) if not signed
          # use builder.zext(...) if not signed
  • Correctness: FP128/X86_FP80 handling is unsafe/incomplete.
    • FP128Type is referenced directly and may be undefined -> NameError at runtime. Use hasattr(ir, "FP128Type") and refer to ir.FP128Type instead. (L.460, L.476)
      def _float_type_from_width(self, width: int) -> ir.Type:
          """Create float type from bit width (safe for optional types)."""
          # use hasattr(ir, "FP128Type") and ir.FP128Type()
    • _float_type_from_width() silently falls back to 32-bit for widths in (64,128), e.g., x86_fp80, causing precision loss. Either support ir.X86_FP80Type explicitly or at least fall back to DOUBLE_TYPE instead of FLOAT_TYPE for >64 and <128. Also detect X86_FP80 in _float_bit_width(). (L.460, L.476)
      def _float_bit_width(self, ty: ir.Type) -> int:
          """Bit width including platform extended types."""
          # handle ir.X86_FP80Type if available
  • Behavior change/perf: scalar-vector promotion now widens the vector element type to the widest float (e.g., float op double -> double). This can break downstream typing assumptions and degrade perf on targets without vector f64. Consider preserving the vector element type when one operand is a vector (only cast/splat the scalar), unless your type system specifies "widest wins"; a fuller sketch follows the snippet below. (L.441)
    def _unify_numeric_operands(self, lhs: ir.Value, rhs: ir.Value) -> tuple[ir.Value, ir.Value]:
        """Prefer vector element type when only one operand is vector."""
        # if exactly one is vector: target_scalar_ty = that vector's element type
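
A fuller sketch of that alternative (sketch only; it reuses the is_vector/splat_scalar helpers and _cast_value_to_type that this PR already uses, and elides the fall-through case):

    def _unify_numeric_operands(self, lhs: ir.Value, rhs: ir.Value) -> tuple[ir.Value, ir.Value]:
        """Sketch: prefer the vector's element type when exactly one operand is a vector."""
        lhs_is_vec = is_vector(lhs)
        rhs_is_vec = is_vector(rhs)
        if lhs_is_vec != rhs_is_vec:
            vec = lhs if lhs_is_vec else rhs
            scalar = rhs if lhs_is_vec else lhs
            target_scalar_ty = vec.type.element
            vec_ty = ir.VectorType(target_scalar_ty, vec.type.count)
            # cast only the scalar, then splat; the vector keeps its element type
            scalar = self._cast_value_to_type(scalar, target_scalar_ty)
            scalar = splat_scalar(self._llvm.ir_builder, scalar, vec_ty)
            return (vec, scalar) if lhs_is_vec else (scalar, vec)
        # otherwise fall through to the existing widest-wins promotion
        ...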

LGTM!


tests/test_llvmlite_helpers.py

  • Potential false positive: test_unify_int_and_float_scalars_returns_float allows widened_int to be any FP type, which can mask a bug where operands are not actually unified to the same type. Tighten the assertions to ensure both operands are the same type and that it matches FLOAT_TYPE as the docstring states. (L.153-L.155)
    Replace:
    assert is_fp_type(widened_int.type)
    assert widened_float.type == visitor._llvm.FLOAT_TYPE
    With:
    assert widened_int.type == visitor._llvm.FLOAT_TYPE
    assert widened_float.type == visitor._llvm.FLOAT_TYPE
    assert widened_int.type == widened_float.type

  • Maintainability risk: tests hinge on private APIs (visitor._unify_numeric_operands and visitor._llvm.*). Consider exposing a small public helper (or alias) to stabilize the contract and avoid brittle coupling to internals. E.g., add a public wrapper in LLVMLiteIRVisitor and update these calls. (L.106, L.130, L.148)
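
    For example, a thin public wrapper could stabilize the contract (sketch only; unify_numeric_operands is a hypothetical name):

        # inside LLVMLiteIRVisitor
        def unify_numeric_operands(
            self, lhs: ir.Value, rhs: ir.Value
        ) -> tuple[ir.Value, ir.Value]:
            """Public alias for the internal numeric-unification helper."""
            return self._unify_numeric_operands(lhs, rhs)

    The tests could then call visitor.unify_numeric_operands(...) without depending on the underscore-prefixed internals.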


@omsherikar
Contributor Author

@xmnlab @yuvimittal please have a look

@github-actions

OSL ChatGPT Reviewer

NOTE: This is generated by an AI program, so some comments may not make sense.

src/irx/builders/llvmliteir.py

  • Correctness: The numeric unification runs before operator classification, so bitwise/shift ops with mixed int/float will silently cast ints to float (sitofp) and later fail or miscompile. Gate unification to arithmetic-only ops. (L.670)
    Suggested change:
    def _should_unify_as_arith(self, node: astx.BinaryOp) -> bool:
        """Decide if numeric unification applies (arith only, not bitwise/shifts)."""
        # implement based on node.op
        return node.op in {astx.Op.ADD, astx.Op.SUB, astx.Op.MUL, astx.Op.DIV, astx.Op.MOD, astx.Op.POW}

    In visit(BinaryOp):

        if self._should_unify_as_arith(node) and self._is_numeric_value(llvm_lhs) and self._is_numeric_value(llvm_rhs):
            llvm_lhs, llvm_rhs = self._unify_numeric_operands(llvm_lhs, llvm_rhs)

  • Correctness: Signedness is ignored during integer widening and int->float casts (sext/sitofp). This will break unsigned semantics (e.g., OR/AND after a sext, or uitofp required). Thread signedness from AST/type info and choose zext/uitofp when appropriate. (L.533, L.557)
    Suggested change:
    def _cast_value_to_type(self, value: ir.Value, target_scalar_ty: ir.Type, *, signed: bool) -> ir.Value:
        """Cast scalars or vectors to the target scalar type with signedness awareness."""
        builder = self._llvm.ir_builder
        value_is_vec = is_vector(value)
        lanes = value.type.count if value_is_vec else None
        current_scalar_ty = value.type.element if value_is_vec else value.type
        target_ty = ir.VectorType(target_scalar_ty, lanes) if value_is_vec else target_scalar_ty

        if current_scalar_ty == target_scalar_ty and value.type == target_ty:
            return value

        current_is_float = is_fp_type(current_scalar_ty)
        target_is_float = is_fp_type(target_scalar_ty)

        if target_is_float:
            if current_is_float:
                current_bits = self._float_bit_width(current_scalar_ty)
                target_bits = self._float_bit_width(target_scalar_ty)
                if current_bits == target_bits:
                    return builder.bitcast(value, target_ty) if value.type != target_ty else value
                return builder.fpext(value, target_ty, "fpext") if current_bits < target_bits else builder.fptrunc(value, target_ty, "fptrunc")
            return (builder.sitofp if signed else builder.uitofp)(value, target_ty, "itofp")

        if current_is_float:
            raise Exception("Cannot implicitly convert floating-point to integer")

        current_width = getattr(current_scalar_ty, "width", 0)
        target_width = getattr(target_scalar_ty, "width", 0)
        if current_width == target_width:
            return builder.bitcast(value, target_ty) if value.type != target_ty else value
        return (builder.sext if signed else builder.zext)(value, target_ty, "ext") if current_width < target_width else builder.trunc(value, target_ty, "trunc")
    
    • Pass signed=... from AST type info where calling _cast_value_to_type/_unify_numeric_operands.
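
    A hypothetical call site for that threading (sketch only; it assumes _unify_numeric_operands is extended to forward the flag to _cast_value_to_type, and reuses the node.unsigned attribute this file already consults for integer division):

        signed = not getattr(node, "unsigned", False)
        llvm_lhs, llvm_rhs = self._unify_numeric_operands(
            llvm_lhs, llvm_rhs, signed=signed
        )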
  • Correctness: i1 booleans are treated as numeric here and may be promoted/splat, which can corrupt logical ops. Exclude i1 from _is_numeric_value. (L.435)
    Suggested change:
    def _is_numeric_value(self, value: ir.Value) -> bool:
        """Return True if value represents an int/float scalar or vector (excluding i1)."""
        if is_vector(value):
            elem_ty = value.type.element
            if isinstance(elem_ty, ir.IntType) and getattr(elem_ty, "width", 0) == 1:
                return False
            return isinstance(elem_ty, ir.IntType) or is_fp_type(elem_ty)
        base_ty = value.type
        if isinstance(base_ty, ir.IntType) and getattr(base_ty, "width", 0) == 1:
            return False
        return isinstance(base_ty, ir.IntType) or is_fp_type(base_ty)

  • Portability: FP128Type() may not be legal on all targets even if the class exists. Prefer a target-aware handle if available (e.g., self._llvm.FP128_TYPE) and only select if the module/target supports it; otherwise fall back to DOUBLE. (L.470)
    Suggested change:
    def _float_type_from_width(self, width: int) -> ir.Type:
        """Select a usable float type for the current target."""
        if width <= FLOAT16_BITS and hasattr(self._llvm, "FLOAT16_TYPE"):
            return self._llvm.FLOAT16_TYPE
        if width <= FLOAT32_BITS:
            return self._llvm.FLOAT_TYPE
        if width <= FLOAT64_BITS:
            return self._llvm.DOUBLE_TYPE
        if hasattr(self._llvm, "FP128_TYPE"):
            return self._llvm.FP128_TYPE
        return self._llvm.DOUBLE_TYPE


tests/test_llvmlite_helpers.py

LGTM!


@github-actions

OSL ChatGPT Reviewer

NOTE: This is generated by an AI program, so some comments may not make sense.

src/irx/builders/llvmliteir.py

  • Possible NameError on FP128Type: you reference FP128Type directly without importing it. Use ir.FP128Type to avoid runtime errors. Update both checks and constructors. (L.472, L.486)

    • Replace:
      • if FP128Type is not None and width >= FLOAT128_BITS:
      • if FP128Type is not None and isinstance(ty, FP128Type):
    • With:
      • if hasattr(ir, "FP128Type") and width >= FLOAT128_BITS:
      • if hasattr(ir, "FP128Type") and isinstance(ty, ir.FP128Type):
  • Incorrect downcast of non-64/128 FP types: _float_type_from_width falls back to 32-bit float for widths >64 and <128 (e.g., x86 80-bit). Add explicit support for X86_FP80 to prevent precision loss. (L.468)

    • Suggested change:
      def _float_type_from_width(self, width: int) -> ir.Type:
          """Select float type by bit-width."""
          if width <= FLOAT16_BITS and hasattr(self._llvm, "FLOAT16_TYPE"):
              return self._llvm.FLOAT16_TYPE
          if width <= FLOAT32_BITS:
              return self._llvm.FLOAT_TYPE
          if width <= FLOAT64_BITS:
              return self._llvm.DOUBLE_TYPE
          if hasattr(ir, "X86_FP80Type") and width <= 80:
              return ir.X86_FP80Type()
          if hasattr(ir, "FP128Type") and width >= FLOAT128_BITS:
              return ir.FP128Type()
          return self._llvm.FLOAT_TYPE
  • Signedness bugs in integer promotions/casts:

    • Widening ints uses sext, which is wrong for unsigned/boolean operands; and int->float uses sitofp, which is wrong for unsigned. At minimum, treat i1 as unsigned to avoid -1 for True. (L.520, L.536)
    • Suggested changes:
      def _cast_value_to_type(self, value: ir.Value, target_scalar_ty: ir.Type) -> ir.Value:
          """Cast scalars or vectors to the target scalar type."""
          ...
          if target_is_float:
              if current_is_float:
                  ...
              # int -> float
              if isinstance(current_scalar_ty, ir.IntType) and getattr(current_scalar_ty, "width", 0) == 1:
                  return builder.uitofp(value, target_ty, "uitofp")  # bools are unsigned
              return builder.sitofp(value, target_ty, "sitofp")
          ...
          # int -> wider int
          if current_width < target_width:
              if current_width == 1:
                  return builder.zext(value, target_ty, "zext")  # preserve boolean semantics
              return builder.sext(value, target_ty, "sext")
  • Vector element width selection: when neither operand is float, you pick max(lhs_width, rhs_width, 1). If either width lookup fails (returns 0), this can produce i1 and silently narrow. Consider asserting widths > 0 for ints to avoid accidental i1. (L.449)

    • Suggested guard:
      def _unify_numeric_operands(self, lhs: ir.Value, rhs: ir.Value) -> tuple[ir.Value, ir.Value]:
          """Ensure numeric operands share shape and scalar type."""
          ...
          if not (isinstance(lhs_base_ty, ir.IntType) and isinstance(rhs_base_ty, ir.IntType)):
              ...
          lhs_width = getattr(lhs_base_ty, "width", 0)
          rhs_width = getattr(rhs_base_ty, "width", 0)
          if lhs_width <= 0 or rhs_width <= 0:
              raise Exception("Unsupported integer type without width")
          target_scalar_ty = ir.IntType(max(lhs_width, rhs_width))

tests/test_llvmlite_helpers.py

LGTM!


Copilot AI review requested due to automatic review settings January 19, 2026 08:27
@github-actions

OSL ChatGPT Reviewer

NOTE: This is generated by an AI program, so some comments may not make sense.

src/irx/builders/llvmliteir.py

ChatGPT was not able to review the file. Error: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}

tests/test_llvmlite_helpers.py

ChatGPT was not able to review the file. Error: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}


Copilot AI left a comment


Pull request overview

This PR centralizes numeric operand promotion for binary operations by introducing _unify_numeric_operands and related helper methods, replacing scattered scalar-vector promotion logic with a unified approach that handles ints, floats, and vectors consistently before LLVM emission.

Changes:

  • Added _unify_numeric_operands method and supporting helpers (_select_float_type, _float_type_from_width, _float_bit_width, _cast_value_to_type, _is_numeric_value) to standardize numeric type promotion
  • Replaced 60+ lines of duplicated scalar-vector promotion logic with calls to the new unified method
  • Added comprehensive unit tests covering scalar-to-vector, float-to-double, and int-to-float promotion scenarios

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 7 comments.

Files reviewed:

  • src/irx/builders/llvmliteir.py: Adds centralized numeric promotion infrastructure with helper methods and integrates it into the BinaryOp visitor, removing old scattered promotion logic.
  • tests/test_llvmlite_helpers.py: Adds three unit tests validating scalar-to-vector promotion, float type widening, and mixed int-float promotion.
Comments suppressed due to low confidence (2)

src/irx/builders/llvmliteir.py:711

  • After calling _unify_numeric_operands, vector operands are guaranteed to have matching counts and element types. However, the subsequent checks on lines 703-711 duplicate these validations. Since _unify_numeric_operands already ensures vector size and element type consistency (lines 475-478 check vector size mismatch, and the promotion logic ensures matching element types), these redundant checks could be removed or moved into a separate validation function to improve code clarity and reduce duplication. A sketch of such a helper follows the quoted lines.
            if llvm_lhs.type.count != llvm_rhs.type.count:
                raise Exception(
                    f"Vector size mismatch: {llvm_lhs.type} vs {llvm_rhs.type}"
                )
            if llvm_lhs.type.element != llvm_rhs.type.element:
                raise Exception(
                    f"Vector element type mismatch: "
                    f"{llvm_lhs.type.element} vs {llvm_rhs.type.element}"
                )
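
One way to implement that separate validation function (sketch only; _validate_vector_operands is a hypothetical name that simply hoists the checks quoted above):

    def _validate_vector_operands(self, lhs: ir.Value, rhs: ir.Value) -> None:
        """Raise if two vector operands disagree on lane count or element type."""
        if lhs.type.count != rhs.type.count:
            raise Exception(
                f"Vector size mismatch: {lhs.type} vs {rhs.type}"
            )
        if lhs.type.element != rhs.type.element:
            raise Exception(
                f"Vector element type mismatch: "
                f"{lhs.type.element} vs {rhs.type.element}"
            )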

src/irx/builders/llvmliteir.py:792

  • Scalar numeric operands are being promoted twice: first by _unify_numeric_operands (lines 698-700), and then again by promote_operands (line 792). This redundant promotion is inefficient. Consider either: (1) skipping _unify_numeric_operands for scalar-only operations, or (2) removing the promote_operands call for numeric types since they've already been unified. The behavior should be correct since both methods use compatible promotion strategies, but the double work is unnecessary. A sketch of option (1) follows the quoted block.
        if self._is_numeric_value(llvm_lhs) and self._is_numeric_value(
            llvm_rhs
        ):
            llvm_lhs, llvm_rhs = self._unify_numeric_operands(
                llvm_lhs, llvm_rhs
            )
        # If both operands are LLVM vectors, handle as vector ops
        if is_vector(llvm_lhs) and is_vector(llvm_rhs):
            if llvm_lhs.type.count != llvm_rhs.type.count:
                raise Exception(
                    f"Vector size mismatch: {llvm_lhs.type} vs {llvm_rhs.type}"
                )
            if llvm_lhs.type.element != llvm_rhs.type.element:
                raise Exception(
                    f"Vector element type mismatch: "
                    f"{llvm_lhs.type.element} vs {llvm_rhs.type.element}"
                )
            is_float_vec = is_fp_type(llvm_lhs.type.element)
            op = node.op_code
            set_fast = is_float_vec and getattr(node, "fast_math", False)
            if op == "*" and is_float_vec and getattr(node, "fma", False):
                if not hasattr(node, "fma_rhs"):
                    raise Exception("FMA requires a third operand (fma_rhs)")
                self.visit(node.fma_rhs)
                llvm_fma_rhs = safe_pop(self.result_stack)
                if llvm_fma_rhs.type != llvm_lhs.type:
                    raise Exception(
                        f"FMA operand type mismatch: "
                        f"{llvm_lhs.type} vs {llvm_fma_rhs.type}"
                    )
                if set_fast:
                    self.set_fast_math(True)
                try:
                    result = self._emit_fma(llvm_lhs, llvm_rhs, llvm_fma_rhs)
                finally:
                    if set_fast:
                        self.set_fast_math(False)
                self.result_stack.append(result)
                return
            if set_fast:
                self.set_fast_math(True)
            try:
                if op == "+":
                    if is_float_vec:
                        result = self._llvm.ir_builder.fadd(
                            llvm_lhs, llvm_rhs, name="vfaddtmp"
                        )
                        self._apply_fast_math(result)
                    else:
                        result = self._llvm.ir_builder.add(
                            llvm_lhs, llvm_rhs, name="vaddtmp"
                        )
                elif op == "-":
                    if is_float_vec:
                        result = self._llvm.ir_builder.fsub(
                            llvm_lhs, llvm_rhs, name="vfsubtmp"
                        )
                        self._apply_fast_math(result)
                    else:
                        result = self._llvm.ir_builder.sub(
                            llvm_lhs, llvm_rhs, name="vsubtmp"
                        )
                elif op == "*":
                    if is_float_vec:
                        result = self._llvm.ir_builder.fmul(
                            llvm_lhs, llvm_rhs, name="vfmultmp"
                        )
                        self._apply_fast_math(result)
                    else:
                        result = self._llvm.ir_builder.mul(
                            llvm_lhs, llvm_rhs, name="vmultmp"
                        )
                elif op == "/":
                    if is_float_vec:
                        result = self._llvm.ir_builder.fdiv(
                            llvm_lhs, llvm_rhs, name="vfdivtmp"
                        )
                        self._apply_fast_math(result)
                    else:
                        unsigned = getattr(node, "unsigned", None)
                        if unsigned is None:
                            raise Exception(
                                "Cannot infer integer division signedness "
                                "for vector op"
                            )
                        result = emit_int_div(
                            self._llvm.ir_builder, llvm_lhs, llvm_rhs, unsigned
                        )
                else:
                    raise Exception(f"Vector binop {op} not implemented.")
            finally:
                if set_fast:
                    self.set_fast_math(False)
            self.result_stack.append(result)
            return

        # Scalar Fallback: Original scalar promotion logic
        llvm_lhs, llvm_rhs = self.promote_operands(llvm_lhs, llvm_rhs)
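
A sketch of option (1) above (assumption, not the PR's actual code): gate the unified promotion on a vector operand being present, so scalar-only operands keep using promote_operands:

        if (is_vector(llvm_lhs) or is_vector(llvm_rhs)) and (
            self._is_numeric_value(llvm_lhs)
            and self._is_numeric_value(llvm_rhs)
        ):
            llvm_lhs, llvm_rhs = self._unify_numeric_operands(
                llvm_lhs, llvm_rhs
            )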


    def _unify_numeric_operands(
        self, lhs: ir.Value, rhs: ir.Value
    ) -> tuple[ir.Value, ir.Value]:
        """Ensure numeric operands share shape and scalar type."""

Copilot AI Jan 19, 2026


The _unify_numeric_operands method would benefit from more detailed documentation. The current docstring "Ensure numeric operands share shape and scalar type" is minimal. Consider documenting: (1) the promotion rules (e.g., int promotes to float, narrower types promote to wider), (2) parameter types and constraints, (3) return value guarantees, (4) what exceptions can be raised, and (5) examples of transformations. This is a critical function for type safety and clear documentation would help maintainers understand the promotion semantics.

Suggested change
"""Ensure numeric operands share shape and scalar type."""
"""
Normalize two numeric LLVM values to a common scalar type and shape.
This helper is used before emitting arithmetic or comparison
instructions so that both operands are type-compatible. It supports
scalar and vector integers / floating-point values and performs both
scalar type promotion and optional scalar-to-vector splatting.
Promotion rules
---------------
* Shape:
- If both operands are vectors, they must have the same number of
lanes; otherwise an Exception is raised.
- If exactly one operand is a vector, its lane count is used as the
target shape and the scalar operand is splatted to a vector of the
same lane count after type promotion.
- If both operands are scalars, the result operands remain scalars.
* Scalar type:
- If either operand has a floating-point scalar type, both operands
are promoted to a common floating-point type selected via
``self._select_float_type`` from the floating-point candidates.
- If both operands have integer scalar types, both are promoted to an
integer type with ``width = max(lhs.width, rhs.width)`` (at least
1 bit), preserving signedness semantics as implemented by
``_cast_value_to_type``.
Parameters
----------
lhs : llvmlite.ir.Value
Left-hand numeric operand. May be a scalar or vector of integer or
floating-point type.
rhs : llvmlite.ir.Value
Right-hand numeric operand. May be a scalar or vector of integer or
floating-point type.
Returns
-------
(llvmlite.ir.Value, llvmlite.ir.Value)
A pair ``(lhs', rhs')`` where:
* ``lhs'.type`` and ``rhs'.type`` have the same scalar element type.
* If either operand is a vector, both results are vectors with the
same lane count.
Raises
------
Exception
If both operands are vectors and their lane counts (``.count``)
differ.
Any exception raised by ``_cast_value_to_type`` may also propagate if
the operands cannot be safely cast to the selected target type.
Examples
--------
* ``i32 + i64`` -> both operands promoted to ``i64``.
* ``float + i32`` -> both operands promoted to ``float``.
* ``<4 x i16> + i32`` -> scalar ``i32`` cast to ``i32`` then splatted
to ``<4 x i32>`` to match the vector operand.
"""

            float_candidates = [
                ty for ty in (lhs_base_ty, rhs_base_ty) if is_fp_type(ty)
            ]
            target_scalar_ty = self._select_float_type(float_candidates)

Copilot AI Jan 19, 2026


When mixing integer and floating-point operands, the integer width is not considered when selecting the target float type. For example, an int64 combined with a float32 will promote both to float32, which can cause precision loss since float32 cannot accurately represent all int64 values. Consider promoting to at least double (float64) when the integer operand has width > 32 bits, or document this behavior if the precision loss is acceptable for your use case.

Suggested change
            target_scalar_ty = self._select_float_type(float_candidates)
            target_scalar_ty = self._select_float_type(float_candidates)
            # If we are mixing an integer with a floating-point value, ensure that
            # wide integers (> 32 bits) are promoted to at least double precision
            # to avoid excessive precision loss when the selected float type is
            # narrower than 64 bits.
            if lhs_is_float != rhs_is_float:
                int_base_ty = lhs_base_ty if not lhs_is_float else rhs_base_ty
                int_width = getattr(int_base_ty, "width", 0)
                # Determine the bit-width of the selected floating-point type.
                float_bits = 0
                if isinstance(target_scalar_ty, HalfType):
                    float_bits = FLOAT16_BITS
                elif isinstance(target_scalar_ty, FloatType):
                    float_bits = FLOAT32_BITS
                elif isinstance(target_scalar_ty, DoubleType):
                    float_bits = FLOAT64_BITS
                elif FP128Type is not None and isinstance(target_scalar_ty, FP128Type):
                    float_bits = FLOAT128_BITS
                # Upgrade to double precision when combining a wide integer with
                # a float type that is narrower than 64 bits.
                if int_width > 32 and float_bits and float_bits < FLOAT64_BITS:
                    target_scalar_ty = DoubleType()


        if lhs_is_vec and rhs_is_vec and lhs.type.count != rhs.type.count:
            raise Exception(
                f"Vector size mismatch: {lhs.type.count} vs {rhs.type.count}"

Copilot AI Jan 19, 2026


The error message could be more informative by including the operation context. Instead of just "Vector size mismatch: X vs Y", consider including information about what operation was being attempted (e.g., "Binary operation '+' requires matching vector sizes, but got X vs Y"). This would help developers debug issues more quickly.

Suggested change
f"Vector size mismatch: {lhs.type.count} vs {rhs.type.count}"
"Numeric operation requires matching vector sizes, "
f"but got {lhs.type} (size {lhs.type.count}) vs "
f"{rhs.type} (size {rhs.type.count})"

Comment on lines +468 to +512
    def _unify_numeric_operands(
        self, lhs: ir.Value, rhs: ir.Value
    ) -> tuple[ir.Value, ir.Value]:
        """Ensure numeric operands share shape and scalar type."""
        lhs_is_vec = is_vector(lhs)
        rhs_is_vec = is_vector(rhs)

        if lhs_is_vec and rhs_is_vec and lhs.type.count != rhs.type.count:
            raise Exception(
                f"Vector size mismatch: {lhs.type.count} vs {rhs.type.count}"
            )

        target_lanes = None
        if lhs_is_vec:
            target_lanes = lhs.type.count
        elif rhs_is_vec:
            target_lanes = rhs.type.count

        lhs_base_ty = lhs.type.element if lhs_is_vec else lhs.type
        rhs_base_ty = rhs.type.element if rhs_is_vec else rhs.type

        lhs_is_float = is_fp_type(lhs_base_ty)
        rhs_is_float = is_fp_type(rhs_base_ty)

        if lhs_is_float or rhs_is_float:
            float_candidates = [
                ty for ty in (lhs_base_ty, rhs_base_ty) if is_fp_type(ty)
            ]
            target_scalar_ty = self._select_float_type(float_candidates)
        else:
            lhs_width = getattr(lhs_base_ty, "width", 0)
            rhs_width = getattr(rhs_base_ty, "width", 0)
            target_scalar_ty = ir.IntType(max(lhs_width, rhs_width, 1))

        lhs = self._cast_value_to_type(lhs, target_scalar_ty)
        rhs = self._cast_value_to_type(rhs, target_scalar_ty)

        if target_lanes:
            vec_ty = ir.VectorType(target_scalar_ty, target_lanes)
            if not is_vector(lhs):
                lhs = splat_scalar(self._llvm.ir_builder, lhs, vec_ty)
            if not is_vector(rhs):
                rhs = splat_scalar(self._llvm.ir_builder, rhs, vec_ty)

        return lhs, rhs

Copilot AI Jan 19, 2026


The new unification logic changes the type promotion behavior compared to the removed code. Previously, when combining a float vector with a double scalar, the scalar would be truncated (fptrunc) to match the vector's element type. Now, both are promoted to the wider type (double). This is generally better for precision, but represents a behavior change that could affect existing code relying on the old behavior. Ensure this is intentional and documented, especially since it could impact numerical precision in existing computations.

)

assert is_fp_type(widened_int.type)
assert widened_float.type == visitor._llvm.FLOAT_TYPE

Copilot AI Jan 19, 2026


Missing blank line before function definition. According to PEP 8, there should be two blank lines before top-level function definitions to maintain consistency with the rest of the file.

Suggested change
assert widened_float.type == visitor._llvm.FLOAT_TYPE
assert widened_float.type == visitor._llvm.FLOAT_TYPE

Comment on lines +101 to +156
def test_unify_promotes_scalar_int_to_vector() -> None:
    """Scalar ints splat to match vector operands and widen width."""
    visitor = LLVMLiteIRVisitor()
    _prime_builder(visitor)

    vec_ty = ir.VectorType(ir.IntType(32), 2)
    vec = ir.Constant(vec_ty, [ir.Constant(ir.IntType(32), 1)] * 2)
    scalar = ir.Constant(ir.IntType(16), 5)

    promoted_vec, promoted_scalar = visitor._unify_numeric_operands(
        vec, scalar
    )

    assert isinstance(promoted_vec.type, ir.VectorType)
    assert isinstance(promoted_scalar.type, ir.VectorType)
    assert promoted_vec.type == vec_ty
    assert promoted_scalar.type == vec_ty


def test_unify_vector_float_rank_matches_double() -> None:
    """Float vectors upgrade to match double scalars."""
    visitor = LLVMLiteIRVisitor()
    _prime_builder(visitor)

    float_vec_ty = ir.VectorType(visitor._llvm.FLOAT_TYPE, 2)
    float_vec = ir.Constant(
        float_vec_ty,
        [
            ir.Constant(visitor._llvm.FLOAT_TYPE, 1.0),
            ir.Constant(visitor._llvm.FLOAT_TYPE, 2.0),
        ],
    )
    double_scalar = ir.Constant(visitor._llvm.DOUBLE_TYPE, 4.0)

    widened_vec, widened_scalar = visitor._unify_numeric_operands(
        float_vec, double_scalar
    )

    assert widened_vec.type.element == visitor._llvm.DOUBLE_TYPE
    assert widened_scalar.type.element == visitor._llvm.DOUBLE_TYPE


def test_unify_int_and_float_scalars_returns_float() -> None:
    """Scalar int + float promotes to float for both operands."""
    visitor = LLVMLiteIRVisitor()
    _prime_builder(visitor)

    int_scalar = ir.Constant(visitor._llvm.INT32_TYPE, 7)
    float_scalar = ir.Constant(visitor._llvm.FLOAT_TYPE, 1.25)

    widened_int, widened_float = visitor._unify_numeric_operands(
        int_scalar, float_scalar
    )

    assert is_fp_type(widened_int.type)
    assert widened_float.type == visitor._llvm.FLOAT_TYPE

Copilot AI Jan 19, 2026


Test coverage is missing several important edge cases for _unify_numeric_operands: (1) two vectors with mismatched element types (e.g., int32 vector vs float vector), (2) truncation scenarios where a wider type needs to be narrowed to match another operand, (3) FP128 type handling if available, (4) error case where vectors have different sizes, and (5) scalar-to-scalar integer promotion with different widths. Consider adding tests for these scenarios to ensure the unification logic handles all cases correctly.
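
For example, edge case (4) could be covered like this (sketch only, following the style of the tests quoted above; pytest and the _prime_builder helper are assumed):

    import pytest

    def test_unify_rejects_mismatched_vector_sizes() -> None:
        """Vectors with different lane counts should raise."""
        visitor = LLVMLiteIRVisitor()
        _prime_builder(visitor)

        vec2 = ir.Constant(
            ir.VectorType(ir.IntType(32), 2),
            [ir.Constant(ir.IntType(32), 1)] * 2,
        )
        vec4 = ir.Constant(
            ir.VectorType(ir.IntType(32), 4),
            [ir.Constant(ir.IntType(32), 1)] * 4,
        )

        with pytest.raises(Exception, match="Vector size mismatch"):
            visitor._unify_numeric_operands(vec2, vec4)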

Comment on lines +551 to +555
            lanes = value.type.count
            current_scalar_ty = value.type.element
            target_ty = ir.VectorType(target_scalar_ty, lanes)
        else:
            lanes = None

Copilot AI Jan 19, 2026


Variable lanes is not used.

Suggested change
            lanes = value.type.count
            current_scalar_ty = value.type.element
            target_ty = ir.VectorType(target_scalar_ty, lanes)
        else:
            lanes = None
            current_scalar_ty = value.type.element
            target_ty = ir.VectorType(target_scalar_ty, value.type.count)
        else:
@github-actions

OSL ChatGPT Reviewer

NOTE: This is generated by an AI program, so some comments may not make sense.

src/irx/builders/llvmliteir.py

ChatGPT was not able to review the file. Error: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}

tests/test_llvmlite_helpers.py

ChatGPT was not able to review the file. Error: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}

