Make CPU SHM pool importable without NumPy and modernize core deps/CI/telemetry #2
Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 58ab5b38aa
```python
from torch.utils.cpp_extension import BuildExtension, CUDAExtension
...
def parse_requirements(filename: str) -> list[str]:
```
Fix Python 3.8 annotation that breaks setup import
The new return annotation list[str] requires PEP 585 (Python ≥3.9). Because setup.py is executed at import time during installation, this will raise TypeError: 'type' object is not subscriptable on Python 3.8, even though python_requires='>=3.8' still advertises 3.8 support. Use typing.List[str] or postpone evaluation with from __future__ import annotations to keep 3.8 installs working.
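The fix Codex suggests can be sketched as follows. Only the annotation change is the point here; the body of `parse_requirements` is an illustrative guess at a typical defensive implementation, not the PR's actual code.

```python
# Sketch of the suggested setup.py fix: typing.List[str] works on
# Python 3.8, unlike the PEP 585 builtin-generic form list[str].
from typing import List


def parse_requirements(filename: str) -> List[str]:
    """Read a pip requirements file, skipping comments and blank lines."""
    try:
        with open(filename, encoding="utf-8") as f:
            lines = [line.strip() for line in f]
    except OSError:
        # Defensive: a missing requirements file should not break setup.py.
        return []
    return [line for line in lines if line and not line.startswith("#")]
```

Alternatively, adding `from __future__ import annotations` at the top of `setup.py` postpones annotation evaluation (PEP 563), so the original `list[str]` spelling would also import cleanly on Python 3.8.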
Motivation
Description
- Made `zerolink/core/cpu/shm_pool.py` importable without NumPy by using `try/except ImportError`, added a runtime guard in `get_numpy_array` that raises a clear `RuntimeError` if NumPy is absent, and replaced `np.dtype` typing with `Any` to avoid import-time failures.
- Split the requirements files (`requirements-gpu.txt`, `requirements-ray.txt`, `requirements-cgpu.txt`, `requirements-dev.txt`), added `numpy>=1.24.0` to the core `requirements.txt`, and updated `pyproject.toml` and `setup.py` to support extras (`gpu`, `ray`, `cgpu`, `dev`, `full`) and to allow installation without the Torch/CUDA toolchain.
- Reworked CI (`.github/workflows/ci.yml`) to split `test-core`, `test-gpu-profile`, and `test-ray-profile` jobs, pin newer Python versions, and run CPU-only tests with `--noconftest` where appropriate.
- Renamed `pynexus` metrics to `zerolink_*`, added new metrics and helper functions (`observe_runtime_latency`, `record_alloc_failure`, `record_lease_event`) in `zerolink/monitoring/telemetry.py`, and wired telemetry/logging calls into runtime, server, and worker code paths (`zerolink/runtime/unified.py`, `zerolink/server/main_server.py`, `zerolink/workers/gpu_worker.py`).
- Updated defaults (`/tmp/zerolink.sock`, `zerolink_cpu_pool`), JSON-structured log messages, better `MainServer`/`MainIPCLeaseManager2P` lifecycle and lease bookkeeping, safer handling of missing GPU components, and improved graceful shutdown logic.
- Miscellaneous renames (`pynexus` -> `zerolink`), defensive parsing of requirements in `setup.py`, optional CUDA extension build when Torch is available, and conftest adjustments so tests are CPU-safe.

Testing
- Ran `python -m py_compile zerolink/core/cpu/shm_pool.py`, which succeeded.
- Ran a smoke test of `SharedMemoryPool` (create pool, allocate two blocks, free both, assert coalescing, cleanup), which printed `shm coalescing ok` and returned successfully.
- Ran `pytest tests/test_protocol.py -q` and again with `pytest --noconftest tests/test_protocol.py -q`; both runs passed (3 passed).

Codex Task