dependabot[bot] commented on behalf of GitHub on Oct 7, 2025

Bumps the pip group with 3 updates in the / directory: ray, torch and vllm.
Bumps the pip group with 1 update in the /runner/helix-diffusers directory: torch.
Bumps the pip group with 1 update in the /scripts/knowledge directory: tqdm.
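
As a quick way to confirm that the bumped pins are what actually ends up installed in a given environment, a minimal sketch (package names and versions are taken from the summary above; the snippet itself is not part of this PR):

```python
from importlib.metadata import version

# Expected versions after this update, per the summary above. Run inside the
# environment built from the bumped requirements file.
expected = {"ray": "2.49.2", "torch": "2.8.0", "vllm": "0.11.0", "tqdm": "4.66.3"}
for pkg, want in expected.items():
    got = version(pkg)
    print(f"{pkg}: installed {got}, expected {want}, ok={got == want}")
```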

Updates ray from 2.48.0 to 2.49.2

Release notes

Sourced from ray's releases.

Ray-2.49.2

There is no difference between 2.49.2 and 2.49.1, though we needed a patch version for other out of band reasons. To fill the awkward blankness, here is a haiku about Ray:

Summit drawing near
Ray advances, step by step
Scaling without end

Ray-2.49.1

  • Ray Dashboard: Fix issue where GPU metrics are missing (#56006)
  • Ray Data: Fixed regression in handling very large schemas (#56058)

Ray-2.49.0

Release Highlights

Ray Data:

  • We’ve implemented a variety of performance enhancements, including improved actor/node autoscaling with budget-aware decisions; faster/more accurate shuffle accounting; reduced Parquet metadata footprint; and out-of-order execution for higher throughput.
  • We’ve also implemented anti/semi joins, stratified train_test_split, and added Snowflake connectors.

Ray Core:

  • Performance/robustness cleanups around the GCS publish path and raylet internals; simpler OpenTelemetry flagging; a new user-facing API to wait for GPU tensor free; plus assorted test/infra tidy-ups.

Ray Train:

  • We’ve introduced a new JaxTrainer with SPMD support for TPUs.

Ray Serve:

  • Custom Autoscaling per Deployment: Serve now supports user-defined autoscaling policies via AutoscalingContext and AutoscalingPolicy, enabling fine-grained scaling logic at the deployment level. This is part of a larger effort to add support for autoscaling based on custom metrics in Serve; see this RFC for more details.
  • Async Inference (Initial Support): Ray Serve introduces asynchronous inference execution, laying the foundation for better throughput and latency in async workloads. Please see this RFC for more details.
  • Major Performance Gains: This version of Ray Serve brings double-digit percentage improvements in both throughput and latency. See the release notes for more details.

Ray Serve/Data LLM:

  • We’ve refactored Ray Serve LLM to be fully compatible with the default vllm serve, and it now supports vLLM 0.10.
  • We’ve added a prefix cache-aware router with PrefixCacheAffinityRouter for optimized cache utilization; dynamic cache management via reset prefix cache remote methods; enhanced LMCacheConnectorV1 with kv_transfer_config support.

Ray Libraries

Ray Data

🎉 New Features:

  • Wrapped batch indices in a BatchMetadata object to make per-batch metadata explicit. (#55643)
  • Added support for Anti/Semi Join types. (#55272)
  • Introduced an Issue Detection Framework. (#55155)
  • Added an option to enable out-of-order execution for better performance. (#54504)
  • Introduced a StreamingSplit logical operator for DAG rewrite. (#54994)
  • Added a stratify parameter to train_test_split (see the sketch after this list). (#54624)
  • Added Snowflake connectors. (#51429)
  • Updated Hudi integration to support incremental query. (#54301)
  • Added an Actor location tracker. (#54590)

... (truncated)
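
The stratify addition flagged in the list above is new in 2.49. A minimal sketch, assuming the parameter names the column whose class proportions should be preserved in both splits (train_test_split itself is the existing Ray Data API; only the keyword comes from the release note, so the exact signature is unverified):

```python
import ray

# Toy dataset with an imbalanced "label" column.
ds = ray.data.from_items(
    [{"x": i, "label": int(i % 10 == 0)} for i in range(1_000)]
)

# train_test_split(test_size) is the existing Ray Data API; stratify is the
# 2.49 addition (#54624), assumed here to name the column to balance across
# the resulting train and test datasets.
train_ds, test_ds = ds.train_test_split(test_size=0.2, stratify="label")
```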

Commits

Updates torch from 2.7.1 to 2.8.0

Release notes

Sourced from torch's releases.

PyTorch 2.8.0 Release Notes

Highlights

... (truncated)

Commits
  • ba56102 Cherrypick: Add the RunLLM widget to the website (#159592)
  • c525a02 [dynamo, docs] cherry pick torch.compile programming model docs into 2.8 (#15...
  • a1cb3cc [Release Only] Remove nvshmem from list of preload libraries (#158925)
  • c76b235 Move out super large one off foreach_copy test (#158880)
  • 20a0e22 Revert "[Dynamo] Allow inlining into AO quantization modules (#152934)" (#158...
  • 9167ac8 [MPS] Switch Cholesky decomp to column wise (#158237)
  • 5534685 [MPS] Reimplement tri[ul] as Metal shaders (#158867)
  • d19e08d Cherry pick PR 158746 (#158801)
  • a6c044a [cherry-pick] Unify torch.tensor and torch.ops.aten.scalar_tensor behavior (#...
  • 620ebd0 [Dynamo] Use proper sources for constructing dataclass defaults (#158689)
  • Additional commits viewable in compare view

Updates vllm from 0.10.0 to 0.11.0

Release notes

Sourced from vllm's releases.

v0.11.0

Highlights

This release features 538 commits, 207 contributors (65 new contributors)!

  • This release completes the removal of the V0 engine. V0 engine code, including AsyncLLMEngine, LLMEngine, MQLLMEngine, all attention backends, and related components, has been removed; V1 is now the only engine in the codebase (see the sketch after this list).
  • This release turns on FULL_AND_PIECEWISE as the default CUDA graph mode. This should provide better out-of-the-box performance for most models, particularly fine-grained MoEs, while preserving compatibility with existing models that support only PIECEWISE mode.
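
With the V0 engine classes gone, code that constructed LLMEngine or AsyncLLMEngine directly has to move to the remaining entry points, the offline LLM API or the vllm serve CLI, both of which now run on V1. A minimal offline sketch (the model name is only an example; FULL_AND_PIECEWISE is the new default, so no extra configuration is needed to get it):

```python
from vllm import LLM, SamplingParams

# The offline LLM API is unaffected by the V0 removal and always runs the V1
# engine in 0.11.0; FULL_AND_PIECEWISE CUDA graphs are the default.
llm = LLM(model="facebook/opt-125m")  # example model, substitute your own
params = SamplingParams(temperature=0.8, max_tokens=64)

outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    print(out.outputs[0].text)
```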

Model Support

  • New architectures: DeepSeek-V3.2-Exp (#25896), Qwen3-VL series (#24727), Qwen3-Next (#24526), OLMo3 (#24534), LongCat-Flash (#23991), Dots OCR (#24645), Ling2.0 (#24627), CWM (#25611).
  • Encoders: RADIO encoder support (#24595), Transformers backend support for encoder-only models (#25174).
  • Task expansion: BERT token classification/NER (#24872), multimodal models for pooling tasks (#24451).
  • Data parallel for vision encoders: InternVL (#23909), Qwen2-VL (#25445), Qwen3-VL (#24955).
  • Speculative decoding: EAGLE3 for MiniCPM3 (#24243) and GPT-OSS (#25246).
  • Features: Qwen3-VL text-only mode (#26000), EVS video token pruning (#22980), Mamba2 TP+quantization (#24593), MRoPE + YaRN (#25384), Whisper on XPU (#25123), LongCat-Flash-Chat tool calling (#24083).
  • Performance: GLM-4.1V 916ms TTFT reduction via fused RMSNorm (#24733), GLM-4 MoE SharedFusedMoE optimization (#24849), Qwen2.5-VL CUDA sync removal (#24741), Qwen3-VL Triton MRoPE kernel (#25055), FP8 checkpoints for Qwen3-Next (#25079).
  • Reasoning: SeedOSS reason parser (#24263).

Engine Core

  • KV cache offloading: CPU offloading with LRU management (#19848, #20075, #21448, #22595, #24251).
  • V1 features: Prompt embeddings (#24278), sharded state loading (#25308), FlexAttention sliding window (#24089), LLM.apply_model (#18465).
  • Hybrid allocator: Pipeline parallel (#23974), varying hidden sizes (#25101).
  • Async scheduling: Uniprocessor executor support (#24219).
  • Architecture: Tokenizer group removal (#24078), shared memory multimodal caching (#20452).
  • Attention: Hybrid SSM/Attention in Triton (#21197), FlashAttention 3 for ViT (#24347).
  • Performance: FlashInfer RoPE 2x speedup (#21126), fused Q/K RoPE 11% improvement (#24511, #25005), 8x spec decode overhead reduction (#24986), FlashInfer spec decode with 1.14x speedup (#25196), model info caching (#23558), inputs_embeds copy avoidance (#25739).
  • LoRA: Optimized weight loading (#25403).
  • Defaults: CUDA graph mode FULL_AND_PIECEWISE (#25444), Inductor standalone compile disabled (#25391).
  • torch.compile: CUDA graph Inductor partition integration (#24281).

Hardware & Performance

  • NVIDIA: FP8 FlashInfer MLA decode (#24705), BF16 fused MoE for Hopper/Blackwell expert parallel (#25503).
  • DeepGEMM: Enabled by default (#24462), 5.5% throughput improvement (#24783).
  • New architectures: RISC-V 64-bit (#22112), ARM non-x86 CPU (#25166), ARM 4-bit fused MoE (#23809).
  • AMD: ROCm 7.0 (#25178), GLM-4.5 MI300X tuning (#25703).
  • Intel XPU: MoE DP accuracy fix (#25465).

Large Scale Serving & Performance

  • Dual-Batch Overlap (DBO): Overlapping computation mechanism (#23693), DeepEP high throughput + prefill (#24845).
  • Data Parallelism: torchrun launcher (#24899), Ray placement groups (#25026), Triton DP/EP kernels (#24588).
  • EPLB: Hunyuan V1 (#23078), Mixtral (#22842), static placement (#23745), reduced overhead (#24573).
  • Disaggregated serving: KV transfer metrics (#22188), NIXL MLA latent dimension (#25902).
  • MoE: Shared expert overlap optimization (#24254), SiLU kernel for DeepSeek-R1 (#24054), Enable Allgather/ReduceScatter backend for NaiveAllToAll (#23964).
  • Distributed: NCCL symmetric memory with 3-4% throughput improvement (#24532), enabled by default for TP (#25070).

Quantization

  • FP8: Per-token-group quantization (#24342), hardware-accelerated instructions (#24757), torch.compile KV cache (#22758), paged attention update (#22222).
  • FP4: NVFP4 for dense models (#25609), Gemma3 (#22771), Llama 3.1 405B (#25135).
  • W4A8: Faster preprocessing (#23972).
  • Compressed tensors: Blocked FP8 for MoE (#25219).

... (truncated)

Commits
  • f71952c [Build/CI] Revert back to Ubuntu 20.04, install python 3.12 with uv (#26103)
  • d100776 [Bugfix] Disable cascade attention with FlashInfer (#26130)
  • c75c2e7 [Deepseek v3.2] Support indexer prefill chunking (#25999)
  • 9d9a2b7 [Small] Prevent bypassing media domain restriction via HTTP redirects (#26035)
  • 6040e0b [BugFix] Fix FI accuracy issue when used for MLA prefill (#26063)
  • 05bf0c5 Update base image to 22.04 (jammy) (#26065)
  • c536881 [BugFix] ChunkedLocalAttention is currently not CG compatible (#26034)
  • ebce361 [BugFix][DP/EP] Fix CUTLASS MLA hang under load (#26026)
  • e4beabd [BugFix] Fix default kv-cache-dtype default for DeepseekV3.2 (#25988)
  • febb688 [Bugfix] Fix __syncwarp on ROCM (#25996)
  • Additional commits viewable in compare view

Updates torch from 2.5.1+cu124 to 2.8.0

Release notes

Sourced from torch's releases.

PyTorch 2.8.0 Release Notes

Highlights

... (truncated)

Commits
  • ba56102 Cherrypick: Add the RunLLM widget to the website (#159592)
  • c525a02 [dynamo, docs] cherry pick torch.compile programming model docs into 2.8 (#15...
  • a1cb3cc [Release Only] Remove nvshmem from list of preload libraries (#158925)
  • c76b235 Move out super large one off foreach_copy test (#158880)
  • 20a0e22 Revert "[Dynamo] Allow inlining into AO quantization modules (#152934)" (#158...
  • 9167ac8 [MPS] Switch Cholesky decomp to column wise (#158237)
  • 5534685 [MPS] Reimplement tri[ul] as Metal shaders (#158867)
  • d19e08d Cherry pick PR 158746 (#158801)
  • a6c044a [cherry-pick] Unify torch.tensor and torch.ops.aten.scalar_tensor behavior (#...
  • 620ebd0 [Dynamo] Use proper sources for constructing dataclass defaults (#158689)
  • Additional commits viewable in compare view

Updates tqdm from 4.66.1 to 4.66.3

Release notes

Sourced from tqdm's releases.

tqdm v4.66.3 stable

tqdm v4.66.2 stable

  • pandas: add DataFrame.progress_map (#1549); see the sketch after this list
  • notebook: fix HTML padding (#1506)
  • keras: fix resuming training when verbose>=2 (#1508)
  • fix format_num negative fractions missing leading zero (#1548)
  • fix Python 3.12 DeprecationWarning on import (#1519)
  • linting: use f-strings (#1549)
  • update tests (#1549)
  • CI: bump actions (#1549)
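
The DataFrame.progress_map entry above builds on the long-standing tqdm.pandas() registration. A minimal sketch (the DataFrame contents are illustrative; progress_map is assumed to mirror DataFrame.map element-wise, per the release note):

```python
import pandas as pd
from tqdm import tqdm

# tqdm.pandas() registers progress_* variants on pandas objects;
# DataFrame.progress_map is the 4.66.2 addition (#1549).
tqdm.pandas()

df = pd.DataFrame({"a": range(100_000), "b": range(100_000)})
squared = df.progress_map(lambda x: x * x)  # element-wise, with a progress bar
```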
Commits

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore <dependency name> major version will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself)
  • @dependabot ignore <dependency name> minor version will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself)
  • @dependabot ignore <dependency name> will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself)
  • @dependabot unignore <dependency name> will remove all of the ignore conditions of the specified dependency
  • @dependabot unignore <dependency name> <ignore condition> will remove the ignore condition of the specified dependency and ignore conditions

You can disable automated security fix PRs for this repo from the Security Alerts page.

Bumps the pip group with 3 updates in the / directory: [ray](https://github.com/ray-project/ray), [torch](https://github.com/pytorch/pytorch) and [vllm](https://github.com/vllm-project/vllm).
Bumps the pip group with 1 update in the /runner/helix-diffusers directory: [torch](https://github.com/pytorch/pytorch).
Bumps the pip group with 1 update in the /scripts/knowledge directory: [tqdm](https://github.com/tqdm/tqdm).


Updates `ray` from 2.48.0 to 2.49.2
- [Release notes](https://github.com/ray-project/ray/releases)
- [Commits](ray-project/ray@ray-2.48.0...ray-2.49.2)

Updates `torch` from 2.7.1 to 2.8.0
- [Release notes](https://github.com/pytorch/pytorch/releases)
- [Changelog](https://github.com/pytorch/pytorch/blob/main/RELEASE.md)
- [Commits](pytorch/pytorch@v2.7.1...v2.8.0)

Updates `vllm` from 0.10.0 to 0.11.0
- [Release notes](https://github.com/vllm-project/vllm/releases)
- [Changelog](https://github.com/vllm-project/vllm/blob/main/RELEASE.md)
- [Commits](vllm-project/vllm@v0.10.0...v0.11.0)

Updates `torch` from 2.5.1+cu124 to 2.8.0
- [Release notes](https://github.com/pytorch/pytorch/releases)
- [Changelog](https://github.com/pytorch/pytorch/blob/main/RELEASE.md)
- [Commits](pytorch/pytorch@v2.7.1...v2.8.0)

Updates `tqdm` from 4.66.1 to 4.66.3
- [Release notes](https://github.com/tqdm/tqdm/releases)
- [Commits](tqdm/tqdm@v4.66.1...v4.66.3)

---
updated-dependencies:
- dependency-name: ray
  dependency-version: 2.49.2
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: torch
  dependency-version: 2.8.0
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: vllm
  dependency-version: 0.11.0
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: torch
  dependency-version: 2.8.0
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: tqdm
  dependency-version: 4.66.3
  dependency-type: direct:production
  dependency-group: pip
...

Signed-off-by: dependabot[bot] <support@github.com>
dependabot[bot] added the dependencies (Pull requests that update a dependency file) and python (Pull requests that update Python code) labels on Oct 7, 2025