Refactor(handler part 19): final facade cleanup post 17/18 #628
ChuxiJ merged 3 commits into ace-step:main from
Conversation
Caution: Review failed. The pull request is closed.

📝 Walkthrough

This PR adds test methods for music payload generation, improves test infrastructure for MLX VAE initialization module loading with better error handling and module isolation, makes minor formatting adjustments to exception handling, and removes blank lines from test files.

Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs
Suggested reviewers
Poem
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
Actionable comments posted: 4
🧹 Nitpick comments (6)
acestep/core/generation/handler/mlx_vae_encode_native.py (2)
47-48: Consider extracting magic numbers as module-level constants.

The chunk size (48000 * 30) and overlap (48000 * 2) appear in both _mlx_vae_encode_sample and _mlx_encode_single. Extracting these as named constants would improve readability and maintainability.

♻️ Optional: Extract constants
```python
# At module level
_MLX_ENCODE_CHUNK_SAMPLES = 48000 * 30   # 30 seconds at 48kHz
_MLX_ENCODE_OVERLAP_SAMPLES = 48000 * 2  # 2 seconds overlap
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@acestep/core/generation/handler/mlx_vae_encode_native.py` around lines 47 - 48, Extract the repeated magic numbers into descriptive module-level constants and replace their inline uses in both _mlx_vae_encode_sample and _mlx_encode_single: introduce e.g. _MLX_ENCODE_CHUNK_SAMPLES = 48000 * 30 and _MLX_ENCODE_OVERLAP_SAMPLES = 48000 * 2 at top of the module, then update occurrences of 48000 * 30 and 48000 * 2 inside _mlx_vae_encode_sample and _mlx_encode_single to reference these constants to improve readability and maintainability.
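For context, a minimal sketch of how the suggested constants could replace the inline literals. The `_iter_encode_windows` helper and its windowing logic are hypothetical illustrations, not the actual chunking code in `_mlx_vae_encode_sample` or `_mlx_encode_single`.

```python
# Hypothetical sketch only: module-level constants shared by the encode helpers.
_MLX_ENCODE_CHUNK_SAMPLES = 48000 * 30   # 30 seconds at 48 kHz
_MLX_ENCODE_OVERLAP_SAMPLES = 48000 * 2  # 2 seconds of overlap between chunks


def _iter_encode_windows(total_samples: int):
    """Yield (start, end) sample windows that cover total_samples with overlap."""
    step = _MLX_ENCODE_CHUNK_SAMPLES - _MLX_ENCODE_OVERLAP_SAMPLES
    start = 0
    while start < total_samples:
        end = min(start + _MLX_ENCODE_CHUNK_SAMPLES, total_samples)
        yield start, end
        if end == total_samples:
            break
        start += step
```

Both helpers would then reference the constants instead of repeating 48000 * 30 and 48000 * 2 inline.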
133-137: Remove redundant int() calls around round().

In Python 3, round() already returns an integer when called with a single argument. The int() wrapper is unnecessary.

♻️ Proposed fix
```diff
- trim_start = int(round((core_start - win_start) / downsample_factor))
- trim_end = int(round((win_end - core_end) / downsample_factor))
+ trim_start = round((core_start - win_start) / downsample_factor)
+ trim_end = round((win_end - core_end) / downsample_factor)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@acestep/core/generation/handler/mlx_vae_encode_native.py` around lines 133 - 137, Remove the redundant int() wrappers around round() when computing trim_start and trim_end in mlx_vae_encode_native.py: replace trim_start = int(round((core_start - win_start) / downsample_factor)) and trim_end = int(round((win_end - core_end) / downsample_factor)) with calls that just use round(...) (e.g., trim_start = round((core_start - win_start) / downsample_factor) and trim_end = round((win_end - core_end) / downsample_factor)); keep the subsequent logic that computes latent_len, end_idx, and appends latent_chunk[:, trim_start:end_idx, :] unchanged.
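A quick standalone illustration of the point (plain Python, independent of this codebase):

```python
# round() with a single argument already returns an int in Python 3,
# so the int() wrapper around it is redundant.
trim = round(12345 / 8.0)
print(type(trim), trim)  # <class 'int'> 1543
```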
acestep/core/generation/handler/mlx_vae_decode_native.py (2)

90-91: Consider extracting magic numbers as module-level constants.

Similar to the encode file, the chunk size (2048) and overlap (64) could be named constants for clarity.
♻️ Optional: Extract constants
```python
# At module level
_MLX_DECODE_CHUNK_FRAMES = 2048
_MLX_DECODE_OVERLAP_FRAMES = 64
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@acestep/core/generation/handler/mlx_vae_decode_native.py` around lines 90 - 91, Extract the magic numbers used for decoding by replacing local variables mlx_chunk and mlx_overlap in mlx_vae_decode_native.py with module-level constants (e.g., _MLX_DECODE_CHUNK_FRAMES = 2048 and _MLX_DECODE_OVERLAP_FRAMES = 64) and update all references in functions that use mlx_chunk / mlx_overlap accordingly so the values are named and centralized; ensure the constants are defined at the top of the module and referenced where mlx_chunk and mlx_overlap are currently set/used.
113-117: Remove redundant int() calls around round().

Same issue as in the encode file - round() already returns an integer in Python 3.

♻️ Proposed fix
```diff
- trim_start = int(round((core_start - win_start) * upsample_factor))
- trim_end = int(round((win_end - core_end) * upsample_factor))
+ trim_start = round((core_start - win_start) * upsample_factor)
+ trim_end = round((win_end - core_end) * upsample_factor)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@acestep/core/generation/handler/mlx_vae_decode_native.py` around lines 113 - 117, The expressions computing trim_start and trim_end use redundant int(round(...)) calls; update the calculations in mlx_vae_decode_native.py to remove the outer int() and just use round(...) for trim_start and trim_end (i.e., set trim_start = round((core_start - win_start) * upsample_factor) and trim_end = round((win_end - core_end) * upsample_factor)) so subsequent indexing (end_idx calculation and decoded_parts.append(audio_chunk[:, trim_start:end_idx, :])) works with the integer results returned by round().
acestep/core/generation/handler/generate_music_payload.py (1)

39-83: Consider consolidating the two audio tensor loops.

The code iterates over actual_batch_size twice: first to build audio_tensors (lines 39-42), then again to build audios (lines 81-83). These could be merged into a single loop for a slight efficiency improvement.

♻️ Optional: Consolidate loops
```diff
-    audio_tensors = []
-    for index in range(actual_batch_size):
-        audio_tensor = pred_wavs[index].cpu()
-        audio_tensors.append(audio_tensor)
+    audios = []
+    for index in range(actual_batch_size):
+        audio_tensor = pred_wavs[index].cpu()
+        audios.append({"tensor": audio_tensor, "sample_rate": self.sample_rate})
     status_message = "Generation completed successfully!"
-    logger.info(f"[generate_music] Done! Generated {len(audio_tensors)} audio tensors.")
+    logger.info(f"[generate_music] Done! Generated {len(audios)} audio tensors.")
     # ... extra_outputs construction ...
-
-    audios = []
-    for audio_tensor in audio_tensors:
-        audios.append({"tensor": audio_tensor, "sample_rate": self.sample_rate})
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@acestep/core/generation/handler/generate_music_payload.py` around lines 39 - 83, The code creates audio_tensors in a loop over actual_batch_size and then immediately iterates them again to build audios; merge these into a single loop that iterates over range(actual_batch_size) (or enumerate(pred_wavs)) and for each index grabs pred_wavs[index].cpu(), appends that tensor to audio_tensors (if needed) and simultaneously appends the corresponding dict to audios with "tensor" and self.sample_rate; update references to pred_wavs, audio_tensors, audios, actual_batch_size and remove the redundant second loop to keep behavior identical.
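For readability, this is roughly the end state the diff above describes, wrapped in a hypothetical standalone function; `build_audio_payload` is not a name from the PR, and `pred_wavs` is assumed to hold torch tensors.

```python
def build_audio_payload(pred_wavs, actual_batch_size, sample_rate):
    """Sketch of the consolidated single-loop payload build (hypothetical wrapper)."""
    audios = []
    for index in range(actual_batch_size):
        audio_tensor = pred_wavs[index].cpu()  # one host-side copy per generated sample
        audios.append({"tensor": audio_tensor, "sample_rate": sample_rate})
    return audios
```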
acestep/core/generation/handler/generate_music_decode.py (1)

177-179: Verify vae_device is always defined before use.

When vae_cpu is True, the code restores the VAE to vae_device (line 179). However, vae_device is only assigned inside the if vae_cpu: block (line 156). This should be safe since the condition is the same, but the scoping could be clearer.

♻️ Optional: Initialize vae_device earlier for clarity
```diff
 using_mlx_vae = self.use_mlx_vae and self.mlx_vae is not None
-vae_cpu = False
+vae_cpu = False
+vae_device = None  # Will be set if vae_cpu becomes True
 if not using_mlx_vae:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@acestep/core/generation/handler/generate_music_decode.py` around lines 177 - 179, The restore-to-GPU call uses vae_device without guaranteeing it's set in the current scope; ensure vae_device is defined before any conditional use by moving its assignment out of the inner block (or initialize it to a default) where vae_cpu is computed. Concretely, in the generate_music_decode flow compute/set vae_device (the target device for self.vae) before the vae_cpu branch that may move the VAE to CPU, then use that vae_device when calling self.vae.to(vae_device) and in the logger.info message to avoid any scoping ambiguity.
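To make the scoping suggestion concrete, a hedged sketch of the ordering it describes, using hypothetical helper functions rather than the handler's real method; `vae` is assumed to be a torch module.

```python
def offload_vae_if_needed(vae, use_cpu_offload: bool):
    """Sketch: bind vae_device before any conditional assignment."""
    vae_cpu = False
    vae_device = None  # always defined, even when no offload happens
    if use_cpu_offload:
        vae_cpu = True
        vae_device = next(vae.parameters()).device  # remember where to restore it
        vae.to("cpu")
    return vae_cpu, vae_device


def restore_vae(vae, vae_cpu: bool, vae_device):
    """Sketch: the restore path can now check vae_device without scoping doubts."""
    if vae_cpu and vae_device is not None:
        vae.to(vae_device)
```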
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@acestep/core/generation/handler/generate_music_payload_test.py`:
- Around line 66-109: Add a new test in GenerateMusicPayloadMixinTests that
covers an edge/non‑target case for _build_generate_music_success_payload: call
host._build_generate_music_success_payload with optional outputs omitted (e.g.,
remove keys like "spans" or "encoder_hidden_states"), pass progress=None, and
assert the payload still returns success/error fields and that absent optional
keys are handled (no exceptions) and pred_latents is on CPU; also include
assertions that no progress updates occur when progress is None.
In `@acestep/core/generation/handler/mlx_dit_init_test.py`:
- Around line 74-78: Rename the unused classmethod parameter in the fake decoder
definition to silence ARG005: in the test where fake_dit_model.MLXDiTDecoder is
created via type(...) change the classmethod lambda parameter name from cls to
_cls (or _) so the parameter is clearly unused (i.e., {"from_config":
classmethod(lambda _cls, _cfg: object())}) to avoid lint noise while keeping
behavior unchanged.
In `@acestep/core/generation/handler/mlx_dit_init.py`:
- Around line 18-39: The broad exception catch around MLX initialization (the
try/except that wraps MLXDiTDecoder.from_config and convert_and_load and sets
self.mlx_decoder / self.use_mlx_dit / self.mlx_dit_compiled) should either be
narrowed to explicit Exception types (e.g., except (ImportError, RuntimeError,
ValueError) as exc) to avoid a bare/overbroad catch, or if swallowing all init
failures is intentional keep the current except Exception as exc but add an
explicit lint suppression comment for BLE001 (e.g., append "# noqa: BLE001" or
the project’s bandit/flake rule suppression) on the except line so the linter is
satisfied without changing behavior (a hedged sketch of the narrowed variant follows this list).
In `@acestep/core/generation/handler/mlx_vae_init_test.py`:
- Around line 127-129: The fake module’s lambda for fake_utils.tree_map
currently defines an unused parameter `fn` which triggers Ruff ARG005; update
both occurrences (the lambda assigned to fake_utils.tree_map in the test file
and the second similar lambda around lines 158-159) to rename the unused
parameter to `_fn` (or `_`) so the linter ignores it while keeping behavior
unchanged.
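Picking up the MLX initialization comment above, a minimal sketch of the narrowed-catch variant: `MLXDiTDecoder`, `convert_and_load`, and the `self.*` flags come from the review text, while the method wrapper, `self.dit_config`, and `self.checkpoint_path` are assumptions for illustration.

```python
def _init_mlx_dit(self) -> None:
    """Hedged sketch of the narrowed-exception variant, not the actual handler code."""
    try:
        self.mlx_decoder = MLXDiTDecoder.from_config(self.dit_config)
        convert_and_load(self.mlx_decoder, self.checkpoint_path)
        self.use_mlx_dit = True
    except (ImportError, RuntimeError, ValueError) as exc:
        # MLX acceleration is optional: log and fall back rather than failing startup.
        logger.warning(f"MLX DiT init failed, falling back to the torch path: {exc}")
        self.mlx_decoder = None
        self.use_mlx_dit = False
        self.mlx_dit_compiled = False
```

If swallowing every failure really is intentional, the other option the comment names is to keep `except Exception as exc` and append `# noqa: BLE001` on that line.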
---
Nitpick comments:
In `@acestep/core/generation/handler/generate_music_decode.py`:
- Around line 177-179: The restore-to-GPU call uses vae_device without
guaranteeing it's set in the current scope; ensure vae_device is defined before
any conditional use by moving its assignment out of the inner block (or
initialize it to a default) where vae_cpu is computed. Concretely, in the
generate_music_decode flow compute/set vae_device (the target device for
self.vae) before the vae_cpu branch that may move the VAE to CPU, then use that
vae_device when calling self.vae.to(vae_device) and in the logger.info message
to avoid any scoping ambiguity.
In `@acestep/core/generation/handler/generate_music_payload.py`:
- Around line 39-83: The code creates audio_tensors in a loop over
actual_batch_size and then immediately iterates them again to build audios;
merge these into a single loop that iterates over range(actual_batch_size) (or
enumerate(pred_wavs)) and for each index grabs pred_wavs[index].cpu(), appends
that tensor to audio_tensors (if needed) and simultaneously appends the
corresponding dict to audios with "tensor" and self.sample_rate; update
references to pred_wavs, audio_tensors, audios, actual_batch_size and remove the
redundant second loop to keep behavior identical.
In `@acestep/core/generation/handler/mlx_vae_decode_native.py`:
- Around line 90-91: Extract the magic numbers used for decoding by replacing
local variables mlx_chunk and mlx_overlap in mlx_vae_decode_native.py with
module-level constants (e.g., _MLX_DECODE_CHUNK_FRAMES = 2048 and
_MLX_DECODE_OVERLAP_FRAMES = 64) and update all references in functions that use
mlx_chunk / mlx_overlap accordingly so the values are named and centralized;
ensure the constants are defined at the top of the module and referenced where
mlx_chunk and mlx_overlap are currently set/used.
- Around line 113-117: The expressions computing trim_start and trim_end use
redundant int(round(...)) calls; update the calculations in
mlx_vae_decode_native.py to remove the outer int() and just use round(...) for
trim_start and trim_end (i.e., set trim_start = round((core_start - win_start) *
upsample_factor) and trim_end = round((win_end - core_end) * upsample_factor))
so subsequent indexing (end_idx calculation and
decoded_parts.append(audio_chunk[:, trim_start:end_idx, :])) works with the
integer results returned by round().
In `@acestep/core/generation/handler/mlx_vae_encode_native.py`:
- Around line 47-48: Extract the repeated magic numbers into descriptive
module-level constants and replace their inline uses in both
_mlx_vae_encode_sample and _mlx_encode_single: introduce e.g.
_MLX_ENCODE_CHUNK_SAMPLES = 48000 * 30 and _MLX_ENCODE_OVERLAP_SAMPLES = 48000 *
2 at top of the module, then update occurrences of 48000 * 30 and 48000 * 2
inside _mlx_vae_encode_sample and _mlx_encode_single to reference these
constants to improve readability and maintainability.
- Around line 133-137: Remove the redundant int() wrappers around round() when
computing trim_start and trim_end in mlx_vae_encode_native.py: replace
trim_start = int(round((core_start - win_start) / downsample_factor)) and
trim_end = int(round((win_end - core_end) / downsample_factor)) with calls that
just use round(...) (e.g., trim_start = round((core_start - win_start) /
downsample_factor) and trim_end = round((win_end - core_end) /
downsample_factor)); keep the subsequent logic that computes latent_len,
end_idx, and appends latent_chunk[:, trim_start:end_idx, :] unchanged.
Actionable comments posted: 3
🧹 Nitpick comments (1)
acestep/core/generation/handler/generate_music_payload_test.py (1)
17-51: Module loading helper is well-documented and isolates test dependencies.

The approach of manually registering parent packages before loading the target module is a valid pattern for isolated unit testing. Docstring covers purpose, return, and exceptions.
Optional: consider adding an explicit return type hint for tooling support.
```python
def _load_generate_music_payload_module() -> types.ModuleType:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@acestep/core/generation/handler/generate_music_payload_test.py` around lines 17 - 51, Add an explicit return type annotation to the helper function _load_generate_music_payload_module so its signature reads with -> types.ModuleType; ensure the module-level import for the types symbol exists (i.e., import types) so the annotation resolves. Update only the function signature and, if missing, add the types import near other imports to avoid runtime/name errors.
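To make the module-loading pattern concrete, here is a generic, self-contained sketch of loading a file as a module while registering stub parent packages; `_load_module_for_test` and its details are illustrative, not the helper from this PR.

```python
import importlib.util
import sys
import types
from pathlib import Path


def _load_module_for_test(path: Path, module_name: str) -> types.ModuleType:
    """Load `path` as `module_name`, registering empty parent packages first.

    Illustrative only: mirrors the general pattern of isolating a module
    under test from its real package imports.
    """
    # Register stub parents (e.g. "pkg", "pkg.sub") so the dotted module name
    # can be installed without importing the real packages.
    parts = module_name.split(".")
    for i in range(1, len(parts)):
        parent = ".".join(parts[:i])
        sys.modules.setdefault(parent, types.ModuleType(parent))

    spec = importlib.util.spec_from_file_location(module_name, path)
    if spec is None or spec.loader is None:
        raise ImportError(f"Cannot build import spec for {path}")
    module = importlib.util.module_from_spec(spec)
    sys.modules[module_name] = module
    spec.loader.exec_module(module)
    return module
```

Tests built on this pattern typically remove the registered `sys.modules` entries in a `finally` block, which is what the follow-up commit in this PR does for `mlx_vae_init_test.py`.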
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@acestep/core/generation/handler/mlx_vae_init_test.py`:
- Around line 12-13: The helper _load_handler_module currently lacks a return
type annotation and a detailed docstring; add a return type hint (e.g., ->
types.ModuleType or -> Any) to the function signature and update the docstring
to briefly state the purpose, list inputs (filename: str, module_name: str),
describe the return value (loaded module type) and note any exceptions raised
(e.g., FileNotFoundError, ImportError, SyntaxError). If using types.ModuleType,
import ModuleType from types at top of the file and update the signature to
_load_handler_module(filename: str, module_name: str) -> ModuleType and include
a one-line summary plus "Args:" and "Returns:" (and optionally "Raises:") in the
docstring.
- Around line 80-99: Update _build_fake_mx_core to avoid TRY003 by defining and
raising a local exception class (e.g., CompileError) instead of raise
RuntimeError("compile failed"); add precise type hints for the function
signature and return type (e.g., -> Tuple[ModuleType, Dict[str, int]]), and
expand the docstring to briefly describe purpose, parameters (raise_compile),
return values (fake_mx_core, calls) and the exception raised when raise_compile
is True; keep the helper logic intact and reference the _compile inner function
and calls dict in the docstring.
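As a rough illustration of the shape this comment asks for (type-hinted signature, docstring, and a local exception raised without a message), here is a hedged sketch; the real `_build_fake_mx_core` in the test file differs in its internals.

```python
from types import ModuleType
from typing import Dict, Tuple


class CompileError(RuntimeError):
    """Raised by the fake mx.compile stub when compilation is forced to fail."""


def _build_fake_mx_core(raise_compile: bool = False) -> Tuple[ModuleType, Dict[str, int]]:
    """Build a fake `mx` core module for tests.

    Args:
        raise_compile: When True, the fake `compile` raises CompileError.

    Returns:
        A tuple of (fake_mx_core, calls), where calls counts compile invocations.

    Raises:
        CompileError: Propagated from the fake `compile` when raise_compile is True.
    """
    calls = {"compile": 0}

    def _compile(fn):
        calls["compile"] += 1
        if raise_compile:
            raise CompileError  # message lives in the class, keeping TRY003 quiet
        return fn

    fake_mx_core = ModuleType("mlx.core")
    fake_mx_core.compile = _compile
    return fake_mx_core, calls
```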
---
Nitpick comments:
In `@acestep/core/generation/handler/generate_music_payload_test.py`:
- Around line 17-51: Add an explicit return type annotation to the helper
function _load_generate_music_payload_module so its signature reads with ->
types.ModuleType; ensure the module-level import for the types symbol exists
(i.e., import types) so the annotation resolves. Update only the function
signature and, if missing, add the types import near other imports to avoid
runtime/name errors.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@acestep/core/generation/handler/mlx_vae_init_test.py`:
- Line 113: The raise uses a message string which triggers TRY003 despite the
custom exception; update the raise to use the exception class alone by changing
"raise CompileError(\"compile failed\")" to "raise CompileError" (ensure the
CompileError class defined in this module remains unchanged and in scope so the
bare raise references that exception type).
Force-pushed from d69e074 to d60635b
Summary
Final handler decomposition cleanup after parts 17 and 18.
This PR keeps AceStepHandler as a thin facade and applies only minimal incremental cleanup on top of the part-17/part-18 state.

Dependency
- Refactor(handler part 17): extract MLX helper mixins
- Refactor(handler): decompose generate_music orchestration (part 18)

Scope
- acestep/handler.py
- acestep/core/generation/handler/mlx_vae_native_test.py
- acestep/core/generation/handler/generate_music_payload_test.py
- acestep/core/generation/handler/mlx_dit_init.py
- acestep/core/generation/handler/mlx_dit_init_test.py
- acestep/core/generation/handler/mlx_vae_init_test.py

Latest Updates
- b62d98a: trim now-unused facade imports and keep the MLX native test module within the LOC hard cap (<=200).
- 6d659fe: CodeRabbit follow-up fixes: progress=None, # noqa: BLE001
- d69e074: CodeRabbit follow-up for mlx_vae_init_test.py: sys.modules entries cleaned up in finally, CompileError for TRY003 compliance (191)

Behavior
Validation
- python acestep/core/generation/handler/service_generate_test.py
- python acestep/core/generation/handler/generate_music_test.py
- python acestep/core/generation/handler/generate_music_decode_test.py
- python acestep/core/generation/handler/generate_music_payload_test.py
- python acestep/core/generation/handler/mlx_dit_init_test.py
- python acestep/core/generation/handler/mlx_vae_init_test.py
- python acestep/core/generation/handler/mlx_vae_native_test.py

Notes
Summary by CodeRabbit
Tests
Chores