
Refactor(handler part 19): final facade cleanup post 17/18 #628

Merged
ChuxiJ merged 3 commits into ace-step:main from 1larity:feat/handler-final-fd
Feb 18, 2026

Conversation

1larity (Contributor) commented Feb 17, 2026

Summary

Final handler decomposition cleanup after parts 17 and 18.

This PR keeps AceStepHandler as a thin facade and applies only minimal incremental cleanup on top of the part-17/part-18 state.

Dependency

Scope

  • In scope:
    • acestep/handler.py
    • acestep/core/generation/handler/mlx_vae_native_test.py
    • acestep/core/generation/handler/generate_music_payload_test.py
    • acestep/core/generation/handler/mlx_dit_init.py
    • acestep/core/generation/handler/mlx_dit_init_test.py
    • acestep/core/generation/handler/mlx_vae_init_test.py
  • Out of scope:
    • runtime behavior changes
    • API/signature changes
    • non-target refactors

Latest Updates

  • b62d98a: trim now-unused facade imports and keep the MLX native test module within the LOC hard cap (<=200).
  • 6d659fe: CodeRabbit follow-up fixes:
    • add an edge-case payload test for missing optional outputs with progress=None
    • rename unused lambda params to avoid ARG005 lint noise
    • annotate the intentional broad non-fatal init catch with # noqa: BLE001
  • d69e074: CodeRabbit follow-up for mlx_vae_init_test.py:
    • add helper return type hints and richer helper docs
    • avoid persistent module-stub state by restoring sys.modules entries in finally (see the sketch after this list)
    • replace the generic compile-failure raise path with a local CompileError for TRY003 compliance
    • keep the file under the LOC hard cap (191)
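
To make the sys.modules restoration item above concrete, here is a minimal sketch of the pattern, assuming a hypothetical helper name and stubbed module names (illustrative only, not the repository's exact test code):

import sys
import types
from typing import Callable, TypeVar

T = TypeVar("T")

def _with_stubbed_mlx(run: Callable[[], T]) -> T:
    """Install stub mlx modules for the callable, then always restore sys.modules."""
    saved = {name: sys.modules.get(name) for name in ("mlx", "mlx.core")}
    try:
        sys.modules["mlx"] = types.ModuleType("mlx")  # stub parent package
        sys.modules["mlx.core"] = types.ModuleType("mlx.core")  # stub mlx.core
        return run()
    finally:
        # Put back whatever was there before (or drop the stub entirely)
        # so the stubbed state never persists into later tests.
        for name, original in saved.items():
            if original is None:
                sys.modules.pop(name, None)
            else:
                sys.modules[name] = original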

Behavior

  • No functional behavior changes intended.
  • Facade remains composition-based; changes are import hygiene, test coverage hardening, and lint/test compliance.

Validation

  • python acestep/core/generation/handler/service_generate_test.py
  • python acestep/core/generation/handler/generate_music_test.py
  • python acestep/core/generation/handler/generate_music_decode_test.py
  • python acestep/core/generation/handler/generate_music_payload_test.py
  • python acestep/core/generation/handler/mlx_dit_init_test.py
  • python acestep/core/generation/handler/mlx_vae_init_test.py
  • python acestep/core/generation/handler/mlx_vae_native_test.py

Notes

Summary by CodeRabbit

  • Tests
    • Enhanced test coverage for optional output handling and module loading robustness.
  • Chores
    • Code quality improvements including formatting cleanup and linting adjustments.

coderabbitai bot commented Feb 17, 2026

Caution

Review failed

The pull request is closed.

📝 Walkthrough

This PR adds test methods for music payload generation, improves test infrastructure for MLX VAE initialization module loading with better error handling and module isolation, makes minor formatting adjustments to exception handling, and removes blank lines from test files.
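
As background on the module-loading improvements summarized above, a minimal sketch of the general pattern behind a helper like _load_handler_module might look as follows; the path handling, docstring wording, and error message are assumptions for illustration rather than the exact test code:

import importlib.util
import sys
import types
from pathlib import Path

def _load_handler_module(filename: str, module_name: str) -> types.ModuleType:
    """Load a handler module from disk without importing the full package.

    Args:
        filename: File name of the target module, e.g. "mlx_vae_init.py".
        module_name: Name to register the module under while executing it.

    Returns:
        The executed module object.

    Raises:
        ImportError: If no import spec or loader can be created for the file.
    """
    path = Path(__file__).resolve().parent / filename  # assumes the test sits beside the target
    spec = importlib.util.spec_from_file_location(module_name, path)
    if spec is None or spec.loader is None:
        raise ImportError(f"cannot build an import spec for {filename}")
    module = importlib.util.module_from_spec(spec)
    sys.modules[module_name] = module  # visible while the module body executes
    try:
        spec.loader.exec_module(module)
    finally:
        sys.modules.pop(module_name, None)  # keep sys.modules clean for later tests
    return module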

Changes

  • Music Payload Test (acestep/core/generation/handler/generate_music_payload_test.py): Adds a new test method test_build_success_payload_handles_missing_optional_outputs_without_progress to validate payload construction when optional outputs are absent and the progress callback is None.
  • MLX Module Init Tests (acestep/core/generation/handler/mlx_dit_init_test.py, acestep/core/generation/handler/mlx_vae_init_test.py): Adds the MlxVaeInitMixinTests public test class; improves _load_handler_module with robust module loading, sys.modules isolation/cleanup, error handling, and extended docstrings; enhances the test scaffolding with fake mlx.core helpers (CompileError tracking, compile simulation).
  • Code Quality (acestep/core/generation/handler/mlx_dit_init.py, acestep/core/generation/handler/mlx_vae_native_test.py): Minor noqa annotation for exception handling and removal of unnecessary blank lines.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

Suggested reviewers

  • ChuxiJ

Poem

🐰 Tests hop and skip with careful tread,
Module loaders cleaned up ahead,
Payloads crafted without a care,
Error handling—robust and fair!
Quality blooms with each refine,
The codebase shines, line by line! ✨

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check: ✅ Passed. Check skipped - CodeRabbit’s high-level summary is enabled.
  • Title Check: ✅ Passed. The PR title accurately describes the change as a final handler cleanup refactor following parts 17 and 18, which aligns with the code changes showing composition-based facade improvements and mixin reorganization.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%.


coderabbitai bot left a comment

Actionable comments posted: 4

🧹 Nitpick comments (6)
acestep/core/generation/handler/mlx_vae_encode_native.py (2)

47-48: Consider extracting magic numbers as module-level constants.

The chunk size (48000 * 30) and overlap (48000 * 2) appear in both _mlx_vae_encode_sample and _mlx_encode_single. Extracting these as named constants would improve readability and maintainability.

♻️ Optional: Extract constants
# At module level
_MLX_ENCODE_CHUNK_SAMPLES = 48000 * 30  # 30 seconds at 48kHz
_MLX_ENCODE_OVERLAP_SAMPLES = 48000 * 2  # 2 seconds overlap
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@acestep/core/generation/handler/mlx_vae_encode_native.py` around lines 47 -
48, Extract the repeated magic numbers into descriptive module-level constants
and replace their inline uses in both _mlx_vae_encode_sample and
_mlx_encode_single: introduce e.g. _MLX_ENCODE_CHUNK_SAMPLES = 48000 * 30 and
_MLX_ENCODE_OVERLAP_SAMPLES = 48000 * 2 at top of the module, then update
occurrences of 48000 * 30 and 48000 * 2 inside _mlx_vae_encode_sample and
_mlx_encode_single to reference these constants to improve readability and
maintainability.

133-137: Remove redundant int() calls around round().

In Python 3, round() already returns an integer when called with a single argument. The int() wrapper is unnecessary.

♻️ Proposed fix
-            trim_start = int(round((core_start - win_start) / downsample_factor))
-            trim_end = int(round((win_end - core_end) / downsample_factor))
+            trim_start = round((core_start - win_start) / downsample_factor)
+            trim_end = round((win_end - core_end) / downsample_factor)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@acestep/core/generation/handler/mlx_vae_encode_native.py` around lines 133 -
137, Remove the redundant int() wrappers around round() when computing
trim_start and trim_end in mlx_vae_encode_native.py: replace trim_start =
int(round((core_start - win_start) / downsample_factor)) and trim_end =
int(round((win_end - core_end) / downsample_factor)) with calls that just use
round(...) (e.g., trim_start = round((core_start - win_start) /
downsample_factor) and trim_end = round((win_end - core_end) /
downsample_factor)); keep the subsequent logic that computes latent_len,
end_idx, and appends latent_chunk[:, trim_start:end_idx, :] unchanged.
acestep/core/generation/handler/mlx_vae_decode_native.py (2)

90-91: Consider extracting magic numbers as module-level constants.

Similar to the encode file, the chunk size (2048) and overlap (64) could be named constants for clarity.

♻️ Optional: Extract constants
# At module level
_MLX_DECODE_CHUNK_FRAMES = 2048
_MLX_DECODE_OVERLAP_FRAMES = 64
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@acestep/core/generation/handler/mlx_vae_decode_native.py` around lines 90 -
91, Extract the magic numbers used for decoding by replacing local variables
mlx_chunk and mlx_overlap in mlx_vae_decode_native.py with module-level
constants (e.g., _MLX_DECODE_CHUNK_FRAMES = 2048 and _MLX_DECODE_OVERLAP_FRAMES
= 64) and update all references in functions that use mlx_chunk / mlx_overlap
accordingly so the values are named and centralized; ensure the constants are
defined at the top of the module and referenced where mlx_chunk and mlx_overlap
are currently set/used.

113-117: Remove redundant int() calls around round().

Same issue as in the encode file - round() already returns an integer in Python 3.

♻️ Proposed fix
-            trim_start = int(round((core_start - win_start) * upsample_factor))
-            trim_end = int(round((win_end - core_end) * upsample_factor))
+            trim_start = round((core_start - win_start) * upsample_factor)
+            trim_end = round((win_end - core_end) * upsample_factor)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@acestep/core/generation/handler/mlx_vae_decode_native.py` around lines 113 -
117, The expressions computing trim_start and trim_end use redundant
int(round(...)) calls; update the calculations in mlx_vae_decode_native.py to
remove the outer int() and just use round(...) for trim_start and trim_end
(i.e., set trim_start = round((core_start - win_start) * upsample_factor) and
trim_end = round((win_end - core_end) * upsample_factor)) so subsequent indexing
(end_idx calculation and decoded_parts.append(audio_chunk[:, trim_start:end_idx,
:])) works with the integer results returned by round().
acestep/core/generation/handler/generate_music_payload.py (1)

39-83: Consider consolidating the two audio tensor loops.

The code iterates over actual_batch_size twice: first to build audio_tensors (lines 39-42), then again to build audios (lines 81-83). These could be merged into a single loop for slight efficiency improvement.

♻️ Optional: Consolidate loops
-        audio_tensors = []
-        for index in range(actual_batch_size):
-            audio_tensor = pred_wavs[index].cpu()
-            audio_tensors.append(audio_tensor)
+        audios = []
+        for index in range(actual_batch_size):
+            audio_tensor = pred_wavs[index].cpu()
+            audios.append({"tensor": audio_tensor, "sample_rate": self.sample_rate})
 
         status_message = "Generation completed successfully!"
-        logger.info(f"[generate_music] Done! Generated {len(audio_tensors)} audio tensors.")
+        logger.info(f"[generate_music] Done! Generated {len(audios)} audio tensors.")
         # ... extra_outputs construction ...
-
-        audios = []
-        for audio_tensor in audio_tensors:
-            audios.append({"tensor": audio_tensor, "sample_rate": self.sample_rate})
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@acestep/core/generation/handler/generate_music_payload.py` around lines 39 -
83, The code creates audio_tensors in a loop over actual_batch_size and then
immediately iterates them again to build audios; merge these into a single loop
that iterates over range(actual_batch_size) (or enumerate(pred_wavs)) and for
each index grabs pred_wavs[index].cpu(), appends that tensor to audio_tensors
(if needed) and simultaneously appends the corresponding dict to audios with
"tensor" and self.sample_rate; update references to pred_wavs, audio_tensors,
audios, actual_batch_size and remove the redundant second loop to keep behavior
identical.
acestep/core/generation/handler/generate_music_decode.py (1)

177-179: Verify vae_device is always defined before use.

When vae_cpu is True, the code restores VAE to vae_device (line 179). However, vae_device is only assigned inside the if vae_cpu: block (line 156). This should be safe since the condition is the same, but the scoping could be clearer.

♻️ Optional: Initialize vae_device earlier for clarity
                 using_mlx_vae = self.use_mlx_vae and self.mlx_vae is not None
-                vae_cpu = False
+                vae_cpu = False
+                vae_device = None  # Will be set if vae_cpu becomes True
                 if not using_mlx_vae:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@acestep/core/generation/handler/generate_music_decode.py` around lines 177 -
179, The restore-to-GPU call uses vae_device without guaranteeing it's set in
the current scope; ensure vae_device is defined before any conditional use by
moving its assignment out of the inner block (or initialize it to a default)
where vae_cpu is computed. Concretely, in the generate_music_decode flow
compute/set vae_device (the target device for self.vae) before the vae_cpu
branch that may move the VAE to CPU, then use that vae_device when calling
self.vae.to(vae_device) and in the logger.info message to avoid any scoping
ambiguity.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@acestep/core/generation/handler/generate_music_payload_test.py`:
- Around line 66-109: Add a new test in GenerateMusicPayloadMixinTests that
covers an edge/non‑target case for _build_generate_music_success_payload: call
host._build_generate_music_success_payload with optional outputs omitted (e.g.,
remove keys like "spans" or "encoder_hidden_states"), pass progress=None, and
assert the payload still returns success/error fields and that absent optional
keys are handled (no exceptions) and pred_latents is on CPU; also include
assertions that no progress updates occur when progress is None.

In `@acestep/core/generation/handler/mlx_dit_init_test.py`:
- Around line 74-78: Rename the unused classmethod parameter in the fake decoder
definition to silence ARG005: in the test where fake_dit_model.MLXDiTDecoder is
created via type(...) change the classmethod lambda parameter name from cls to
_cls (or _) so the parameter is clearly unused (i.e., {"from_config":
classmethod(lambda _cls, _cfg: object())}) to avoid lint noise while keeping
behavior unchanged.

In `@acestep/core/generation/handler/mlx_dit_init.py`:
- Around line 18-39: The broad exception catch around MLX initialization (the
try/except that wraps MLXDiTDecoder.from_config and convert_and_load and sets
self.mlx_decoder / self.use_mlx_dit / self.mlx_dit_compiled) should either be
narrowed to explicit Exception types (e.g., except (ImportError, RuntimeError,
ValueError) as exc) to avoid a bare/overbroad catch, or if swallowing all init
failures is intentional keep the current except Exception as exc but add an
explicit lint suppression comment for BLE001 (e.g., append "# noqa: BLE001" or
the project’s bandit/flake rule suppression) on the except line so the linter is
satisfied without changing behavior.

In `@acestep/core/generation/handler/mlx_vae_init_test.py`:
- Around line 127-129: The fake module’s lambda for fake_utils.tree_map
currently defines an unused parameter `fn` which triggers Ruff ARG005; update
both occurrences (the lambda assigned to fake_utils.tree_map in the test file
and the second similar lambda around lines 158-159) to rename the unused
parameter to `_fn` (or `_`) so the linter ignores it while keeping behavior
unchanged.


coderabbitai bot left a comment
Actionable comments posted: 3

🧹 Nitpick comments (1)
acestep/core/generation/handler/generate_music_payload_test.py (1)

17-51: Module loading helper is well-documented and isolates test dependencies.

The approach of manually registering parent packages before loading the target module is a valid pattern for isolated unit testing. Docstring covers purpose, return, and exceptions.

Optional: consider adding an explicit return type hint for tooling support.

def _load_generate_music_payload_module() -> types.ModuleType:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@acestep/core/generation/handler/generate_music_payload_test.py` around lines
17 - 51, Add an explicit return type annotation to the helper function
_load_generate_music_payload_module so its signature reads with ->
types.ModuleType; ensure the module-level import for the types symbol exists
(i.e., import types) so the annotation resolves. Update only the function
signature and, if missing, add the types import near other imports to avoid
runtime/name errors.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@acestep/core/generation/handler/mlx_vae_init_test.py`:
- Around line 12-13: The helper _load_handler_module currently lacks a return
type annotation and a detailed docstring; add a return type hint (e.g., ->
types.ModuleType or -> Any) to the function signature and update the docstring
to briefly state the purpose, list inputs (filename: str, module_name: str),
describe the return value (loaded module type) and note any exceptions raised
(e.g., FileNotFoundError, ImportError, SyntaxError). If using types.ModuleType,
import ModuleType from types at top of the file and update the signature to
_load_handler_module(filename: str, module_name: str) -> ModuleType and include
a one-line summary plus "Args:" and "Returns:" (and optionally "Raises:") in the
docstring.
- Around line 80-99: Update _build_fake_mx_core to avoid TRY003 by defining and
raising a local exception class (e.g., CompileError) instead of raise
RuntimeError("compile failed"); add precise type hints for the function
signature and return type (e.g., -> Tuple[ModuleType, Dict[str, int]]), and
expand the docstring to briefly describe purpose, parameters (raise_compile),
return values (fake_mx_core, calls) and the exception raised when raise_compile
is True; keep the helper logic intact and reference the _compile inner function
and calls dict in the docstring.


coderabbitai bot left a comment

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@acestep/core/generation/handler/mlx_vae_init_test.py`:
- Line 113: The raise uses a message string which triggers TRY003 despite the
custom exception; update the raise to use the exception class alone by changing
"raise CompileError(\"compile failed\")" to "raise CompileError" (ensure the
CompileError class defined in this module remains unchanged and in scope so the
bare raise references that exception type).
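
For reference, the change described in the duplicate comment above amounts to something like the following sketch; the fake compile helper is illustrative, and only CompileError, raise_compile, and the "compile failed" message come from the discussion in this review:

class CompileError(Exception):
    """Local stand-in error raised by the fake mlx.core compile stub."""

def _compile(fn, *, raise_compile: bool = False):
    """Fake compile hook used by the test scaffolding."""
    if raise_compile:
        # Before: raise CompileError("compile failed")  -- the message string trips TRY003
        raise CompileError  # After: a bare class raise keeps Ruff satisfied
    return fn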

1larity force-pushed the feat/handler-final-fd branch from d69e074 to d60635b on February 18, 2026, 09:52
ChuxiJ merged commit 816825b into ace-step:main on Feb 18, 2026
1 of 2 checks passed