[Bugfix] Remove NVFP4 scales assertions to fix load_format=dummy by mgoin · Pull Request #18861 · vllm-project/vllm · GitHub

[Bugfix] Remove NVFP4 scales assertions to fix load_format=dummy #18861


Merged
merged 4 commits into vllm-project:main on May 30, 2025

Conversation

mgoin (Member) commented May 28, 2025

Currently on 0.9.0 when running vllm serve nvidia/DeepSeek-R1-FP4 --quantization modelopt_fp4 -tp 8 --load-format dummy on A100, we run into

(VllmWorker rank=5 pid=3888806) ERROR 05-28 18:39:46 [multiproc_executor.py:487]   File "/home/mgoin/venvs/vllm-rel/lib/python3.12/site-packages/vllm/model_executor/layers/quantization/utils/marlin_utils_fp4.py", line 25, in fp4_marlin_process_scales
(VllmWorker rank=5 pid=3888806) ERROR 05-28 18:39:46 [multiproc_executor.py:487]     assert (marlin_scales >= 0).all()
(VllmWorker rank=5 pid=3888806) ERROR 05-28 18:39:46 [multiproc_executor.py:487]            ^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=5 pid=3888806) ERROR 05-28 18:39:46 [multiproc_executor.py:487] AssertionError

Then on B200 when running vllm serve nvidia/DeepSeek-R1-FP4 --quantization modelopt_fp4 -tp 4 --load-format dummy, we run into

(VllmWorker rank=3 pid=2693151) ERROR 05-28 18:48:28 [multiproc_executor.py:487]   File "/home/mgoin/code/vllm/vllm/model_executor/layers/quantization/modelopt.py", line 588, in process_weights_after_loading
(VllmWorker rank=3 pid=2693151) ERROR 05-28 18:48:28 [multiproc_executor.py:487]     assert torch.allclose(
(VllmWorker rank=3 pid=2693151) ERROR 05-28 18:48:28 [multiproc_executor.py:487]            ^^^^^^^^^^^^^^^
(VllmWorker rank=3 pid=2693151) ERROR 05-28 18:48:28 [multiproc_executor.py:487] AssertionError: w1_weight_scale_2 must match w3_weight_scale_2

These assertions are hit because dummy weights are filled with random values. The checks don't seem to be truly necessary, so this PR removes them.
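
As a rough illustration of why both assertions fire under dummy loading, here is a minimal standalone sketch (a hypothetical repro using stand-in tensor names, not vLLM code): randomly initialized stand-ins for the scale parameters almost surely contain negative entries and almost surely differ from each other.

```python
# Hypothetical standalone sketch, not vLLM code: --load-format dummy fills
# parameters with random values, so both checks removed by this PR fail on
# random stand-in tensors with overwhelming probability.
import torch

marlin_scales = torch.randn(128, 16)   # stand-in for the Marlin NVFP4 scales
w1_weight_scale_2 = torch.randn(())    # stand-ins for the w1/w3 weight_scale_2 params
w3_weight_scale_2 = torch.randn(())

# check removed from fp4_marlin_process_scales (the A100/Marlin path)
print((marlin_scales >= 0).all())      # tensor(False), since randn produces negatives

# check removed from modelopt.py process_weights_after_loading (the B200 path)
print(torch.allclose(w1_weight_scale_2, w3_weight_scale_2))  # False, scales are unrelated
```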


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@mgoin changed the title from "[Bugfix] Remove fp4 marlin scales assertion to fix load_format=dummy" to "[Bugfix] Remove NVFP4 scales assertions to fix load_format=dummy" on May 28, 2025
Signed-off-by: mgoin <mgoin64@gmail.com>
mgoin (Member, Author) commented May 28, 2025

PTAL @pavanimajety @jinzhen-lin

jinzhen-lin (Contributor) commented May 29, 2025

In Marlin NVFP4, I changed the scales from FP8-S1E4M3 to a special FP8-S0E5M3 format to speed up dequantization. This assumed the original scales are >= 0; if we can always ensure that for real weights, this assertion can be removed.
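
For intuition, a simplified sketch of that assumption (my own illustration, not the repacking code in marlin_utils_fp4.py): any value >= 0 stored as FP8-E4M3 has a zero sign bit, which is exactly the bit an unsigned S0E5M3-style layout can repurpose, whereas a negative scale would set it.

```python
# Simplified illustration of the nonnegativity assumption, not the actual
# Marlin conversion: nonnegative FP8-E4M3 values always have a zero sign bit
# (MSB), leaving it free to be reused; a negative scale sets that bit.
# Requires PyTorch >= 2.1 for the float8 dtype.
import torch

scales = torch.tensor([0.5, 1.0, 3.75])
raw = scales.to(torch.float8_e4m3fn).view(torch.uint8)
print(raw >> 7)  # tensor([0, 0, 0], dtype=torch.uint8) -> sign bit always clear

neg = torch.tensor([-0.5]).to(torch.float8_e4m3fn)
print(neg.view(torch.uint8) >> 7)  # tensor([1], dtype=torch.uint8) -> assumption violated
```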

@mgoin added the ready (ONLY add when PR is ready to merge/full CI is needed) and ci/build labels on May 29, 2025
@DarkLight1337 merged commit 4d0a154 into vllm-project:main on May 30, 2025
69 checks passed
amitm02 pushed a commit to amitm02/vllm that referenced this pull request Jun 1, 2025
…m-project#18861)

Signed-off-by: mgoin <mgoin64@gmail.com>
Signed-off-by: amit <amit.man@gmail.com>
amitm02 pushed a commit to amitm02/vllm that referenced this pull request Jun 1, 2025
…m-project#18861)

Signed-off-by: mgoin <mgoin64@gmail.com>
Signed-off-by: amit <amit.man@gmail.com>
minpeter pushed a commit to minpeter/vllm that referenced this pull request Jun 24, 2025
…m-project#18861)

Signed-off-by: mgoin <mgoin64@gmail.com>
Signed-off-by: minpeter <kali2005611@gmail.com>
amogkam added a commit to character-tech/vllm that referenced this pull request Jul 3, 2025
…quest (#12)

* [Bugfix][ROCm] fix the power of 2 exception from triton_unified_attention.py when running llama4 models and unit test fix (vllm-project#18100)

Signed-off-by: Hongxia Yang <hongxia.yang@amd.com>
Signed-off-by: tjtanaa <tunjian.tan@embeddedllm.com>
Co-authored-by: tjtanaa <tunjian.tan@embeddedllm.com>

* Prevent the cross-encoder logic from being applied to classification tasks (vllm-project#18838)

Signed-off-by: Max de Bayser <mbayser@br.ibm.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>

* Add ability to use CUDAGraphs with use_inductor=False (vllm-project#17345)

Signed-off-by: rzou <zou3519@gmail.com>

* [Bugfix][TPU] fix moe custom kernel import (vllm-project#18853)

Signed-off-by: Chengji Yao <chengjiyao@google.com>

* [Doc][Neuron] Update documentation for Neuron (vllm-project#18868)

Signed-off-by: Elaine Zhao <elaineyz@amazon.com>

* Skip device and quant Pydantic validation to make plugin device work (vllm-project#18843)

Signed-off-by: Yikun Jiang <yikunkero@gmail.com>

* Fixes a dead link in nightly benchmark readme (vllm-project#18856)

Signed-off-by: Brent Salisbury <bsalisbu@redhat.com>

* [Neuron] Add multi-LoRA support for Neuron. (vllm-project#18284)

Signed-off-by: Satyajith Chilappagari <satchill@amazon.com>

* [LoRA] Add LoRA support for InternVL  (vllm-project#18842)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* [Doc] Remove redundant spaces from compatibility_matrix.md (vllm-project#18891)

Signed-off-by: windsonsea <haifeng.yao@daocloud.io>

* [doc] add CLI doc (vllm-project#18871)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Bugfix] Fix misleading information in the documentation (vllm-project#18845)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* [Misc] Replace TODO in serving transcription (vllm-project#18895)

Signed-off-by: NickLucche <nlucches@redhat.com>

* [Bugfix] Ensure tensors are contiguous during serialisation (vllm-project#18860)

Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com>

* [BugFix] Update pydantic to fix error on python 3.10 (vllm-project#18852)

Signed-off-by: luka <luka@neuralmagic.com>

* Fix an error in dummy weight loading for quantization models (vllm-project#18855)

Signed-off-by: Chenyaaang <chenyangli@google.com>

* [Misc][Tools][Benchmark] Add benchmark_serving supports for llama.cpp.  (vllm-project#18692)

Signed-off-by: Duyi-Wang <duyi.wang@intel.com>

* [Doc]  Fix codeblocks formatting in LoRA adapters documentation (vllm-project#18907)

Signed-off-by: Zerohertz <ohg3417@gmail.com>

* [Bugfix] Fix the failing gte embedding test (vllm-project#18720)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [Attention][V1] Toggle for v1 attention backend (vllm-project#18275)

Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>

* [ROCm][V0][Attention] Revert to the previous FA triton kernel (vllm-project#18226)

Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>

* [Deprecation] Disallow pos-args other than `model` when initializing `LLM` (vllm-project#18802)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Misc] Remove duplicate init for self.vllm_config (vllm-project#18896)

Signed-off-by: googs1025 <googs1025@gmail.com>

* [V1] Allocate kv_cache with stride order for V1 (vllm-project#18775)

Signed-off-by: nicklucche <nlucches@redhat.com>

* [BugFix] Make DP work with connector-delayed new requests (vllm-project#18559)

Signed-off-by: Nick Hill <nhill@redhat.com>
Co-authored-by: Will Eaton <weaton@redhat.com>

* [P/D] NixlConnector DP fixes (vllm-project#18903)

Signed-off-by: Will Eaton <weaton@redhat.com>

* Use standalone_compile by default in torch >= 2.8.0 (vllm-project#18846)

Signed-off-by: rzou <zou3519@gmail.com>

* [TPU] remove transpose ops in moe kernel (vllm-project#18923)

Signed-off-by: Chengji Yao <chengjiyao@google.com>

* [Bugfix] Fix PP default fallback behavior for V1 (vllm-project#18915)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [Misc] Update type annotation for rotary embedding `base` (vllm-project#18914)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [TPU][CI/CD] Clean up docker for TPU tests. (vllm-project#18926)

Signed-off-by: Carol Zheng <cazheng@google.com>

* improve the robustness of parsing vlms config in AutoRound (vllm-project#18894)

Signed-off-by: wenhuach21 <wenhua.cheng@intel.com>

* [Bugfix] Consistent ascii handling in tool parsers (vllm-project#18883)

Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>

* [Model] Use AutoWeightsLoader for mamba2 (vllm-project#18918)

Signed-off-by: iLeGend <824040212@qq.com>

* [docs] fix: fix markdown syntax (vllm-project#18927)

* [ROCm] Remove unnecessary assertion of max_model_len in ROCM_AITER_MLA attention backend. (vllm-project#18938)

Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>

* [Bugfix] Remove NVFP4 scales assertions to fix load_format=dummy (vllm-project#18861)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [Deprecation] Remove mean pooling default for `Qwen2EmbeddingModel` (vllm-project#18913)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Misc]Fix benchmarks/README.md for speculative decoding (vllm-project#18897)

Signed-off-by: rabi <ramishra@redhat.com>

* [doc] add mkdocs doc (vllm-project#18930)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Model] Use in-place adds in SigLIP (vllm-project#18922)

Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com>

* [Bugfix][Failing Test] Fix test_vllm_port.py (vllm-project#18618)

Signed-off-by: rabi <ramishra@redhat.com>

* [Misc]Fix typo (vllm-project#18947)

* [Bugfix][TPU] Fix tpu model runner testcase failure (vllm-project#18810)

Signed-off-by: Carol Zheng <cazheng@google.com>

* [CI/Build] remove regex from build dependencies (vllm-project#18945)

Signed-off-by: Daniele Trifirò <dtrifiro@redhat.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>

* [Feature] minicpm eagle support (vllm-project#18943)

Signed-off-by: huangyuxiang03 <huangyx0321@gmail.com>
Co-authored-by: huangyuxiang03 <huangyx0321@gmail.com>

* [doc] show the count for fork and watch (vllm-project#18950)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Docs] Update SECURITY.md with link to our security guide (vllm-project#18961)

Signed-off-by: Russell Bryant <rbryant@redhat.com>

* Improve "failed to get the hash of the compiled graph" error (vllm-project#18956)

Signed-off-by: rzou <zou3519@gmail.com>

* [Perf] API-server scaleout with many-to-many server-engine comms  (vllm-project#17546)

* Benchmark script for fp8 vs bf16 gemm (vllm-project#17126)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [VLM] Add PP support and fix GPTQ inference for Ovis models (vllm-project#18958)

Signed-off-by: isotr0py <2037008807@qq.com>
Signed-off-by: Isotr0py <2037008807@qq.com>

* [Misc] add group_size is -1 in awq quantization (vllm-project#18910)

Signed-off-by: rongfu.leng <rongfu.leng@daocloud.io>

* Tool parser regex timeout handling (vllm-project#18960)

Signed-off-by: Will Eaton <weaton@redhat.com>

* [Docs] Correct multiprocessing design doc (vllm-project#18964)

Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com>

* create util function for batched arange (vllm-project#18937)

* [Frontend] Add rerank support to run_batch endpoint (vllm-project#16278)

Signed-off-by: Pooya Davoodi <pooya.davoodi@parasail.io>

* [Misc] Fix estimated max model len msg (vllm-project#18966)

Signed-off-by: Yong Hoon Shin <yhshin@meta.com>

* [Bugfix]: Fix the incompatibility issue with Structured Outputs when Thinking is disabled (vllm-project#18879)

Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>

* fix security issue of logging llm output (vllm-project#18980)

Signed-off-by: Lu Fang <fanglu@fb.com>
Co-authored-by: Lucia (Lu) Fang <fanglu@meta.com>

* [Neuron] Add Multi-Modal model support for Neuron (vllm-project#18921)

Signed-off-by: Satyajith Chilappagari <satchill@amazon.com>
Co-authored-by: Ashraf Mahgoub <ashymahg@amazon.com>
Co-authored-by: Rohith Nallamaddi <nalrohit@amazon.com>
Co-authored-by: FeliciaLuo <luof@amazon.com>
Co-authored-by: Elaine Zhao <elaineyz@amazon.com>

* [doc] fix the list rendering issue - security.md (vllm-project#18982)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [BugFix] Pydantic part 2 (vllm-project#18911)

Signed-off-by: luka <luka@neuralmagic.com>

* [FEAT][ROCm] Add AITER grouped topk for DeepSeekV2 (vllm-project#18825)

Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>

* [Bugfix] Fix for issue 17396 (vllm-project#18773)

Signed-off-by: Fred Reiss <frreiss@us.ibm.com>

* [ROCm][Kernel] Add gfx950 support for skinny gemms (vllm-project#18010)

Signed-off-by: charlifu <charlifu@amd.com>

* [P/D] NixlConnector use cache device index for memory registration (vllm-project#18969)

Signed-off-by: Piotr Tarasiewicz <ptarasiewicz@nvidia.com>

* [BugFix] Fix multi-node offline data-parallel (vllm-project#18981)

Signed-off-by: Nick Hill <nhill@redhat.com>
Co-authored-by: Yizhou Liu <liu_yizhou@outlook.com>

* [Misc] add return token strs for tokenize (vllm-project#18941)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Misc][Benchmark] Add support for CustomDataset (vllm-project#18511)

* [Bugfix] Fix EAGLE3 broken logits (vllm-project#18909)

Signed-off-by: Benjamin Chislett <benjamin.chislett@centml.ai>

* [Core] Rework dtype resolution (vllm-project#18751)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [LoRA] Support dynamically initialize `packed_modules_mapping` for VLM with arbitrary components (vllm-project#18987)

Signed-off-by: isotr0py <2037008807@qq.com>
Signed-off-by: Isotr0py <2037008807@qq.com>

* [doc] small fix -  mkdocs (vllm-project#18996)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* Let max_num_batched_tokens use human_readable_int for large numbers (vllm-project#18968)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [BugFix] fix data parallel construct ipv6 url addres (vllm-project#18991)

Signed-off-by: rongfu.leng <rongfu.leng@daocloud.io>

* [BugFix] Fix incorrect metrics shutdown error log message (vllm-project#18992)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [doc] wrong output (vllm-project#19000)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Misc] reuse num_tokens_across_dp of get_dp_padding to avoid unnecessary dp all reduce in set_forward_context (vllm-project#18935)

Signed-off-by: Tyler Michael Smith <tysmith@redhat.com>
Co-authored-by: zhuhaoran <zhuhaoran.zhr@alibaba-inc.com>
Co-authored-by: Tyler Michael Smith <tysmith@redhat.com>

* [Bugfix][Nixl] Fix DP Metadata Handshake (vllm-project#19008)

Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com>

* [Core] Support inplace model weights loading (vllm-project#18745)

Signed-off-by: 22quinn <33176974+22quinn@users.noreply.github.com>

* [doc] add pytest tips (vllm-project#19010)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Model] enable data parallel for Llama4 vision encoder (vllm-project#18368)

Signed-off-by: yzhen <yzhen@devgpu093.cco2.facebook.com>
Co-authored-by: yZhen <yZhen@fb.com>
Co-authored-by: yzhen <yzhen@devgpu093.cco2.facebook.com>

* [Frontend] enable custom logging for the uvicorn server (OpenAI API server) (vllm-project#18403)

Signed-off-by: François Paupier <francois.paupier@gmail.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>

* [Bugfix][Model] Attempt to fix eagle in V0. (vllm-project#18978)

Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>

* add an absolute path for run.sh (vllm-project#18258)

Signed-off-by: calvin chen <120380290@qq.com>

* [Hardware][TPU] Initial support of model parallelism with single worker using SPMD (vllm-project#18011)

Signed-off-by: Siyuan Liu <lsiyuan@google.com>
Co-authored-by: Hossein Sarshar <hossein.sarshar@gmail.com>
Co-authored-by: Chengji Yao <chengjiyao@google.com>

* [Doc] Remove duplicate TOCs during MkDocs migration (vllm-project#19021)

Signed-off-by: Zerohertz <ohg3417@gmail.com>

* [Bugfix][EP+DP] Use pplx-kernel internode instead of intranode (vllm-project#19034)

Signed-off-by: Tyler Michael Smith <tysmith@redhat.com>
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>

* Adding "LoRA Test %N" to AMD production tests (vllm-project#18929)

Signed-off-by: Yida Wu <yidawu@alumni.cmu.edu>

* [CPU][CI] Re-enable the CPU CI tests (vllm-project#19046)

Signed-off-by: jiang.li <jiang1.li@intel.com>

* [ROCm][Build] Clean up the ROCm build (vllm-project#19040)

Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>

* [V1] Support DP with Ray (vllm-project#18779)

* Add tarsier model support (vllm-project#18985)

Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com>

* [bugfix] small fix logic issue (vllm-project#18999)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* Reduce logs in CLI scripts and plugin loader (vllm-project#18970)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [Bugfix] Use cmake 3.26.1 instead of 3.26 to avoid build failure (vllm-project#19019)

Signed-off-by: Lu Fang <lufang@fb.com>

* [v1][KVCacheManager] Rename BlockHashType to BlockHash (vllm-project#19015)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* Update docker docs with ARM CUDA cross-compile (vllm-project#19037)

Signed-off-by: mgoin <michael@neuralmagic.com>

* [Doc] Add InternVL LoRA support  (vllm-project#19055)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* [Misc] Update `WeightsMapper` for qwen2-vl/qwen2.5-vl (vllm-project#19054)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [Doc] Update V1 user guide for embedding and enc-dec models (vllm-project#19060)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [doc] clarify windows support (vllm-project#19088)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [CI/Build] Remove V0 LoRA test (vllm-project#19066)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* Fix underscores in dict keys passed via CLI (vllm-project#19030)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* [Bugfix] disable processor cache  (vllm-project#19068)

Signed-off-by: raushan <raushan@huggingface.co>

* [Doc] Improve the Pull Request template with key components (vllm-project#19086)

Signed-off-by: Lu Fang <lufang@fb.com>

* [Misc] Add missing `_Backend` enums (vllm-project#19081)

Signed-off-by: nicklucche <nlucches@redhat.com>

* [Misc] fix: add miss best_of param validation (vllm-project#18555)

Signed-off-by: googs1025 <googs1025@gmail.com>

* [Misc] Add SPDX-FileCopyrightText  (vllm-project#19100)

Signed-off-by: simon-mo <simon.mo@hey.com>

* [Doc] Readme standardization (vllm-project#18695)

Co-authored-by: Soren Dreano <soren@numind.ai>

* [doc] update docker version (vllm-project#19074)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Kernel] DeepEP dispatch-combine kernel integration (vllm-project#18434)

Signed-off-by: Varun <vsundarr@redhat.com>
Co-authored-by: Varun Sundar Rabindranath <vsundarr@redhat.com>

* [V1] Support cross-layer KV sharing (vllm-project#18212)

Signed-off-by: Yong Hoon Shin <yhshin@meta.com>

* [Perf] Tune `scaled_fp8_quant` by increasing vectorization (vllm-project#18844)

Signed-off-by: mgoin <mgoin64@gmail.com>

* Fix interaction between `Optional` and `Annotated` in CLI typing (vllm-project#19093)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Co-authored-by: Yikun Jiang <yikun@apache.org>

* [v1] Re-init input batch for multiple kv cache groups (vllm-project#18654)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* [V1][Spec Decode][Ngram] 1.35x gain -> 1.95x gain on InstructCoder with prompt fix (vllm-project#18971)

* [Bugfix] get_num_blocks_to_allocate with null_block (vllm-project#19031)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* [Bugfix]: Fix the incompatibility issue with tool_choice 'required' when Thinking is enabled (vllm-project#19075)

Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>

* [Bugfix][P/D] Fix Prefix Cache Bug (vllm-project#18411)

Signed-off-by: nicklucche <nlucches@redhat.com>
Co-authored-by: Robert Shaw <114415538+robertgshaw2-redhat@users.noreply.github.com>

* [Bugfix] Max concurrency estimation and check_enough_kv_cache_memory for models with sliding window layers (vllm-project#19029)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* feat: add data parallel rank to KVEventBatch (vllm-project#18925)

* [Misc] Fix path and python alias errors in disagg_prefill exmaples (vllm-project#18919)

* [Docs] Add developer doc about CI failures (vllm-project#18782)

Signed-off-by: Russell Bryant <rbryant@redhat.com>
Co-authored-by: Mark McLoughlin <markmc@redhat.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>

* [CPU] V1 support for the CPU backend (vllm-project#16441)

* [Core] Cast multimodal input in hf processor (vllm-project#18862)

Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com>

* [KERNEL] Sampler. CUDA kernel for applying repetition penalty (vllm-project#18437)

* [Cleanup][v1]:remote guided-decoding-backend for example (vllm-project#19059)

Signed-off-by: calvin chen <120380290@qq.com>

* [NVIDIA] Add Cutlass MLA backend (vllm-project#17625)

* [Bugfix] Fix FA3 full cuda graph correctness (vllm-project#19106)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* Fix vllm-project#19130 (vllm-project#19132)

Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com>

* [TPU] Skip hanging tests (vllm-project#19115)

Signed-off-by: Siyuan Liu <lsiyuan@google.com>

* Fix ValueError: Missing value for tag key(s): model_name,engine. (vllm-project#19113)

Signed-off-by: Seiji Eicher <seiji@anyscale.com>

* [Misc] Add packages for benchmark as extra dependency (vllm-project#19089)

Signed-off-by: Isotr0py <2037008807@qq.com>

* Improve the output precision of embedding models (vllm-project#19092)

* [CI/Build][Bugfix] Ensure compatibility with transformers 4.52 (vllm-project#18678)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* Add DeepSeek-R1-0528 function call chat template (vllm-project#18874)

Signed-off-by: 许文卿 <xwq391974@alibaba-inc.com>

* Sm100 blockwise fp8 swap ab (vllm-project#18564)

* [Doc] Update V1 Guide for embedding models (vllm-project#19141)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* Allow AsyncLLMEngine.generate to target a specific DP rank (vllm-project#19102)

Signed-off-by: Jon Swenson <jmswen@gmail.com>

* [Bugfix][EP+DP] Fix internode check (vllm-project#19112)

Signed-off-by: Tyler Michael Smith <tysmith@redhat.com>

* [Perf] Tunings for SM100 FP8 CUTLASS kernel (vllm-project#18778)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [TPU] Update dynamo dump file name in compilation test (vllm-project#19108)

Signed-off-by: Siyuan Liu <lsiyuan@google.com>

* [Bugfix] fix v1 cpu worker fails on macOS (vllm-project#19121)

* [Kernel] Integrate batched/masked deepgemm kernel (vllm-project#19111)

Signed-off-by: Varun <vsundarr@redhat.com>
Co-authored-by: Varun <vsundarr@redhat.com>

* [Misc] refactor: simplify EngineCoreClient.make_async_mp_client in AsyncLLM (vllm-project#18817)

Signed-off-by: googs1025 <googs1025@gmail.com>

* [P/D] Heterogeneous TP (vllm-project#18833)

Signed-off-by: nicklucche <nlucches@redhat.com>

* [doc] small fix (vllm-project#19167)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Bugfix][Nixl] Fix full prefix cache hit bug (vllm-project#18632)

Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com>
Signed-off-by: Nick Hill <nhill@redhat.com>
Co-authored-by: Nick Hill <nhill@redhat.com>

* [Bugfix] Fix port handling in make_zmq_path (vllm-project#19117)

* [Torch Nightly]add missing dependency (vllm-project#18770)

Signed-off-by: Yang Wang <elainewy@meta.com>

* Handle non-serializable objects when dumping benchmark results (vllm-project#19114)

* [BugFix][Minor] Fix full cuda graph bug when max_num_seqs < 512 (vllm-project#19171)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* [Bugfix]: Fix the incompatibility issue with stream when Thinking is disabled (vllm-project#19135)

Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>

* [Build] Annotate wheel and container path for release workflow (vllm-project#19162)

Signed-off-by: simon-mo <simon.mo@hey.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* [Misc] Remove unnecessary fallback to prefill-decode attention (vllm-project#19138)

Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>

* [Misc] Do not override NCCL_CUMEM_ENABLE if set explicitly (vllm-project#19105)

Signed-off-by: 22quinn <33176974+22quinn@users.noreply.github.com>

* [Frontend] improve vllm run-batch --help display (vllm-project#19187)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Bugfix] properly catch PIL-related errors for vision models when incorrect data urls are provided (vllm-project#19202)

Signed-off-by: Guillaume Calmettes <gcalmettes@scaleway.com>

* [mistral_common] Add v11 tokenizer (vllm-project#19193)

Signed-off-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Add H20-3e fused MoE kernel tuning configs for DeepSeek-R1/V3 (vllm-project#19205)

* [Hardware][NVIDIA] FP4 MoE kernel optimization (vllm-project#19110)

Signed-off-by: Chiyue Wei <chiyuew@nvidia.com>
Co-authored-by: Chiyue Wei <chiyuew@nvidia.com>

* [MISC][Bugfix] Use less CPU when message queue has been empty for some time (vllm-project#16226)

Signed-off-by: Povilas Kanapickas <povilas@radix.lt>

* [P/D][NixlConnector] Enable FlashInfer backend (vllm-project#19090)

* [Quantization] Skip Fp4 Test for `compressed-tensors` (vllm-project#19217)

* [V1] Use FlashInfer by default on Blackwell GPUs (vllm-project#19118)

* [Model] NemotronH support (vllm-project#18863)

Signed-off-by: Luis Vega <2478335+vegaluisjose@users.noreply.github.com>
Co-authored-by: Luis Vega <2478335+vegaluisjose@users.noreply.github.com>

* Fix AOPerModuleConfig name changes (vllm-project#18869)

Signed-off-by: Jerry Zhang <jerryzh168@gmail.com>

* [Bugfix] Fix EAGLE vocab embedding construction for Llama 70B (vllm-project#19033)

Signed-off-by: Benjamin Chislett <benjamin.chislett@centml.ai>

* [v1] Hybrid Memory Allocator (vllm-project#17996)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* [TPU] update torch_xla pin (vllm-project#19231)

Signed-off-by: Chengji Yao <chengjiyao@google.com>

* Support allowed_token_ids in ChatCompletionRequest (vllm-project#19143)

Signed-off-by: Xu Song <xusong.vip@gmail.com>

* [Chore] update CODEOWNERS (vllm-project#19247)

Signed-off-by: Aaron Pham <contact@aarnphm.xyz>

* [v1][P/D] Fix a edge case in kv cache schedule (vllm-project#19182)

Co-authored-by: jinghui <jinghui@fb.com>

* [TPU] fix kv cache dtype in model runner (vllm-project#19244)

Signed-off-by: Chengji Yao <chengjiyao@google.com>

* [Quantization] Bump compressed-tensors version; update NVFP4A16 test model (vllm-project#19224)

Signed-off-by: Dipika Sikka <dipikasikka1@gmail.com>

* [Docs] Improve V1 KVConnector interface documentation (vllm-project#19172)

Signed-off-by: Nick Hill <nhill@redhat.com>

* Fix CompilationConfig repr (vllm-project#19091)

Signed-off-by: rzou <zou3519@gmail.com>

* Unit Test for run_dp_sharded_vision_model (vllm-project#19103)

Signed-off-by: Siqi Yan <siqi@meta.com>
Co-authored-by: Siqi Yan <siqi@meta.com>

* [Model] Optimize nemotron_h implementation (vllm-project#19249)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* [Core] Raise when non-multi-instance DP clients target a DP rank (vllm-project#19227)

Signed-off-by: Jon Swenson <jmswen@gmail.com>

* improve logits bias (vllm-project#19041)

* Fixed ppc build when it runs on non-RHEL based linux distros (vllm-project#18422)

Signed-off-by: Nishidha Panpaliya <nishidha.panpaliya@partner.ibm.com>
Signed-off-by: Md. Shafi Hussain <Md.Shafi.Hussain@ibm.com>
Signed-off-by: npanpaliya <nishidha.panpaliya@partner.ibm.com>
Co-authored-by: Md. Shafi Hussain <Md.Shafi.Hussain@ibm.com>

* [BugFix] Fix MultiConnector test after HMA changes (vllm-project#19291)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [Bugfix][Core] Update cancellation logic in `generate()` to handle Generator exits (vllm-project#19225)

Co-authored-by: Adolfo Victoria <adovi@meta.com>

* [Core] Fix abrupt request abort (vllm-project#18485)

Signed-off-by: nicklucche <nlucches@redhat.com>
Signed-off-by: Nick Hill <nhill@redhat.com>

Co-authored-by: Nick Hill <nhill@redhat.com>

* [BugFix] Fix tpu_model_runner block_id concatenation (vllm-project#19228)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [Misc][Tools][Benchmark] Fix and improve auto tune script (vllm-project#19163)

Signed-off-by: Chenyaaang <chenyangli@google.com>

* [Build][ROCm] Update Dockerfile.rocm (vllm-project#19296)

Signed-off-by: Alexei V. Ivanov <alexei.ivanov@amd.com>

* [Easy][Test] Simplify test_function_tool_use with multiple parametrizes (vllm-project#19269)

Signed-off-by: Lu Fang <lufang@fb.com>

* [Kernel] Integrate CUTLASS MoE kernel with PPLX (vllm-project#18762)

Signed-off-by: ElizaWszola <ewszola@redhat.com>
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
Co-authored-by: Tyler Michael Smith <tyler@neuralmagic.com>

* [TPU][Test] Add script to run benchmark on TPU for buildkite (vllm-project#19039)

Signed-off-by: Qiliang Cui <derrhein@gmail.com>

* [CI][PowerPC] Use a more appropriate way to select testcase in tests/models/language/pooling/test_embedding.py (vllm-project#19253)

Signed-off-by: Aaruni Aggarwal <aaruniagg@gmail.com>

* Add FlexAttention to V1 (vllm-project#16078)

Signed-off-by: drisspg <drisspguessous@gmail.com>

* [Misc] refactor context extension (vllm-project#19246)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [CI/Build] Improve Llama GGUF test robustness (vllm-project#19287)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [Nit][Benchmark]Fix example in benchmark_serving_structured_output.py (vllm-project#19311)

Signed-off-by: Lifan Shen <lifans@meta.com>

* [AMD] Update compatible packaging version (vllm-project#19309)

Signed-off-by: pramkuma <Pramendra.Kumar@amd.com>

* [BugFix][V1] Fix memory profiling bug (vllm-project#18974)

Signed-off-by: luka <luka@neuralmagic.com>

* [Bugfix]: Fix TypeError: 'float' object cannot be interpreted as an integer (vllm-project#19283)

Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>

* [Bugfix] Re-enable use_cudagraph in vLLM v1 (vllm-project#19299)

Signed-off-by: Richard Zou <zou3519@gmail.com>

* [Misc] Change tests/compile to use VLLM_V1 by default (vllm-project#19302)

Signed-off-by: rzou <zou3519@gmail.com>

* Add H20-3e fused MoE kernel tuning configs for Qwen3-235B-A22B (vllm-project#19315)

Signed-off-by: Xu Wenqing <xuwq1993@qq.com>

* [Hardware][POWER] Add IBM POWER11 Support to CPU Extension Detection (vllm-project#19082)

Signed-off-by: Akash Kaothalkar <akash.kaothalkar@ibm.com>
Co-authored-by: Akash Kaothalkar <akash.kaothalkar@ibm.com>

* [Quantization] Add compressed-tensors NVFP4 support (vllm-project#18312)

* [Multi Modal] Add an env var for message queue max chunk bytes  (vllm-project#19242)

Signed-off-by: yZhen <yZhen@fb.com>
Co-authored-by: yZhen <yZhen@fb.com>

* [Bugfix] model_max_length should consider max_model_len in tokenizer_config (vllm-project#19201)

* [Deprecation] Remove `inputs` arg fallback in Engine classes (vllm-project#18799)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Misc] Add documentation update reminder to PR template (vllm-project#19289)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [Frontend] Remove unreachable code from llm.py (vllm-project#19288)

Signed-off-by: KsuParkhamchuk <k.parkhamchuk@gmail.com>

* [Misc] Cleanup compilation tests (vllm-project#19343)

Signed-off-by: rzou <zou3519@gmail.com>

* [doc] improve ci doc (vllm-project#19307)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Doc] Fix description in the Automatic Prefix Caching design doc (vllm-project#19333)

Signed-off-by: cr7258 <chengzw258@163.com>

* [CI/Build] Fix LoRA test (vllm-project#19350)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* [Fix] Allow kernel compilation for CUDA capability 8.7 (vllm-project#19328)

Signed-off-by: Conroy Cheers <conroy@corncheese.org>

* [CI] Introduce rules for llama auto-label (vllm-project#19323)

Signed-off-by: Lu Fang <lufang@fb.com>

* [Docs] Fix a bullet list in usage/security.md (vllm-project#19358)

Signed-off-by: windsonsea <haifeng.yao@daocloud.io>

* [full_graph] Fix query_start_loc padding (vllm-project#19321)

Signed-off-by: Yinghai Lu <yinghai@thinkingmachines.ai>

* [v1] Add fp32 support to v1 engine through flex attn (vllm-project#19319)

Signed-off-by: Isotr0py <2037008807@qq.com>
Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>

* [Misc] Fixes and Optimizations for DeepEP + DeepGEMM combination. (vllm-project#19298)

Signed-off-by: Varun <vsundarr@redhat.com>
Co-authored-by: Varun <vsundarr@redhat.com>

* [Bugfix][Core] Prevent token lengths exceeding `max_model_len` in V0 (vllm-project#19348)

Signed-off-by: 22quinn <33176974+22quinn@users.noreply.github.com>

* [Quantization] Bump compressed-tensors version (vllm-project#19295)

Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>

* [Frontend] Make TIMEOUT_KEEP_ALIVE configurable through env var (vllm-project#18472)

Signed-off-by: liusiqian <liusiqian@tal.com>

* [TPU]Fix KV cache sharing tests (vllm-project#19371)

* [HOT-FIX] Add `kv_sharing_target_layer_name` argument to cutlass_mla backend (vllm-project#19374)

Signed-off-by: Pavani Majety <pmajety@nvidia.com>

* [Misc] Fix a config typo in disable_hybrid_kv_cache_manager configuration (vllm-project#19383)

Signed-off-by: Siyuan Liu <lsiyuan@google.com>

* [V1] Reuse V0's memory_profiling util for gpu worker memory profiling (vllm-project#19312)

Signed-off-by: Ye (Charlotte) Qi <yeq@meta.com>

* [Bugfix] Fix benchmark_moe.py (vllm-project#19016)

Signed-off-by: Tianyu Guo <guoty9@mail2.sysu.edu.cn>

* Use xla flag to improve the quantized model performance (vllm-project#19303)

Signed-off-by: Xiongfei Wei <isaacwxf23@gmail.com>

* Fix docs/mkdocs/hooks/remove_announcement.py (vllm-project#19382)

* [Frontend] Add tqdm_leave_pbar to control progress bar visibility (vllm-project#19357)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Core] Use tuple for kv cache group block ids (vllm-project#19175)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [Bugfix] Fix modelscope token passed in (vllm-project#19389)

Signed-off-by: wangli <wangli858794774@gmail.com>
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
Co-authored-by: Jee Jee Li <pandaleefree@gmail.com>

* [Core] Batch multi modal input using pinned memory (vllm-project#19169)

Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com>

* Add security warning to bug report template (vllm-project#19365)

Signed-off-by: Russell Bryant <rbryant@redhat.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* [Misc] refactor neuron_multimodal and profiling (vllm-project#19397)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* Add clear documentation around the impact of debugging flag (vllm-project#19369)

Signed-off-by: Anna Pendleton <pendleton@google.com>

* Automatically bind CPU OMP Threads of a rank to CPU ids of a NUMA node. (vllm-project#17930)

Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
Co-authored-by: Li, Jiang <bigpyj64@gmail.com>

* Revert "[v1] Add fp32 support to v1 engine through flex attn" (vllm-project#19404)

* [BugFix][FlashInfer] Fix attention backend interface mismatch with unexpected keyword `use_irope` (vllm-project#19134)

Signed-off-by: Yunqiu Guo <guorachel@meta.com>

* [BugFix][CPU] Fix CPU CI by ignore collecting test_pixtral (vllm-project#19411)

Signed-off-by: jiang.li <jiang1.li@intel.com>

* Simplify ep kernels installation (vllm-project#19412)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [Misc] Slight improvement of the BNB  (vllm-project#19418)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
Co-authored-by: Isotr0py <2037008807@qq.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* fix

Signed-off-by: Amog Kamsetty <amogkamsetty@gmail.com>

* fix

Signed-off-by: Amog Kamsetty <amogkamsetty@gmail.com>

* update config

Signed-off-by: Amog Kamsetty <amogkamsetty@gmail.com>

* add

Signed-off-by: Amog Kamsetty <amogkamsetty@gmail.com>

---------

Signed-off-by: Hongxia Yang <hongxia.yang@amd.com>
Signed-off-by: tjtanaa <tunjian.tan@embeddedllm.com>
Signed-off-by: Max de Bayser <mbayser@br.ibm.com>
Signed-off-by: rzou <zou3519@gmail.com>
Signed-off-by: Chengji Yao <chengjiyao@google.com>
Signed-off-by: Elaine Zhao <elaineyz@amazon.com>
Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
Signed-off-by: Brent Salisbury <bsalisbu@redhat.com>
Signed-off-by: Satyajith Chilappagari <satchill@amazon.com>
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
Signed-off-by: windsonsea <haifeng.yao@daocloud.io>
Signed-off-by: reidliu41 <reid201711@gmail.com>
Signed-off-by: NickLucche <nlucches@redhat.com>
Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com>
Signed-off-by: luka <luka@neuralmagic.com>
Signed-off-by: Chenyaaang <chenyangli@google.com>
Signed-off-by: Duyi-Wang <duyi.wang@intel.com>
Signed-off-by: Zerohertz <ohg3417@gmail.com>
Signed-off-by: Isotr0py <2037008807@qq.com>
Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Signed-off-by: googs1025 <googs1025@gmail.com>
Signed-off-by: nicklucche <nlucches@redhat.com>
Signed-off-by: Nick Hill <nhill@redhat.com>
Signed-off-by: Will Eaton <weaton@redhat.com>
Signed-off-by: mgoin <mgoin64@gmail.com>
Signed-off-by: Carol Zheng <cazheng@google.com>
Signed-off-by: wenhuach21 <wenhua.cheng@intel.com>
Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>
Signed-off-by: iLeGend <824040212@qq.com>
Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>
Signed-off-by: rabi <ramishra@redhat.com>
Signed-off-by: Daniele Trifirò <dtrifiro@redhat.com>
Signed-off-by: huangyuxiang03 <huangyx0321@gmail.com>
Signed-off-by: Russell Bryant <rbryant@redhat.com>
Signed-off-by: isotr0py <2037008807@qq.com>
Signed-off-by: rongfu.leng <rongfu.leng@daocloud.io>
Signed-off-by: Pooya Davoodi <pooya.davoodi@parasail.io>
Signed-off-by: Yong Hoon Shin <yhshin@meta.com>
Signed-off-by: Lu Fang <fanglu@fb.com>
Signed-off-by: Fred Reiss <frreiss@us.ibm.com>
Signed-off-by: charlifu <charlifu@amd.com>
Signed-off-by: Piotr Tarasiewicz <ptarasiewicz@nvidia.com>
Signed-off-by: Benjamin Chislett <benjamin.chislett@centml.ai>
Signed-off-by: Tyler Michael Smith <tysmith@redhat.com>
Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com>
Signed-off-by: 22quinn <33176974+22quinn@users.noreply.github.com>
Signed-off-by: yzhen <yzhen@devgpu093.cco2.facebook.com>
Signed-off-by: François Paupier <francois.paupier@gmail.com>
Signed-off-by: calvin chen <120380290@qq.com>
Signed-off-by: Siyuan Liu <lsiyuan@google.com>
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
Signed-off-by: Yida Wu <yidawu@alumni.cmu.edu>
Signed-off-by: jiang.li <jiang1.li@intel.com>
Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com>
Signed-off-by: Lu Fang <lufang@fb.com>
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
Signed-off-by: mgoin <michael@neuralmagic.com>
Signed-off-by: youkaichao <youkaichao@gmail.com>
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Signed-off-by: raushan <raushan@huggingface.co>
Signed-off-by: simon-mo <simon.mo@hey.com>
Signed-off-by: Varun <vsundarr@redhat.com>
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Signed-off-by: Seiji Eicher <seiji@anyscale.com>
Signed-off-by: 许文卿 <xwq391974@alibaba-inc.com>
Signed-off-by: Jon Swenson <jmswen@gmail.com>
Signed-off-by: Yang Wang <elainewy@meta.com>
Signed-off-by: Guillaume Calmettes <gcalmettes@scaleway.com>
Signed-off-by: Patrick von Platen <patrick.v.platen@gmail.com>
Signed-off-by: Chiyue Wei <chiyuew@nvidia.com>
Signed-off-by: Povilas Kanapickas <povilas@radix.lt>
Signed-off-by: Luis Vega <2478335+vegaluisjose@users.noreply.github.com>
Signed-off-by: Jerry Zhang <jerryzh168@gmail.com>
Signed-off-by: Xu Song <xusong.vip@gmail.com>
Signed-off-by: Aaron Pham <contact@aarnphm.xyz>
Signed-off-by: Dipika Sikka <dipikasikka1@gmail.com>
Signed-off-by: Siqi Yan <siqi@meta.com>
Signed-off-by: Nishidha Panpaliya <nishidha.panpaliya@partner.ibm.com>
Signed-off-by: Md. Shafi Hussain <Md.Shafi.Hussain@ibm.com>
Signed-off-by: npanpaliya <nishidha.panpaliya@partner.ibm.com>
Signed-off-by: Alexei V. Ivanov <alexei.ivanov@amd.com>
Signed-off-by: ElizaWszola <ewszola@redhat.com>
Signed-off-by: Qiliang Cui <derrhein@gmail.com>
Signed-off-by: Aaruni Aggarwal <aaruniagg@gmail.com>
Signed-off-by: drisspg <drisspguessous@gmail.com>
Signed-off-by: Lifan Shen <lifans@meta.com>
Signed-off-by: pramkuma <Pramendra.Kumar@amd.com>
Signed-off-by: Richard Zou <zou3519@gmail.com>
Signed-off-by: Xu Wenqing <xuwq1993@qq.com>
Signed-off-by: Akash Kaothalkar <akash.kaothalkar@ibm.com>
Signed-off-by: yZhen <yZhen@fb.com>
Signed-off-by: KsuParkhamchuk <k.parkhamchuk@gmail.com>
Signed-off-by: cr7258 <chengzw258@163.com>
Signed-off-by: Conroy Cheers <conroy@corncheese.org>
Signed-off-by: Yinghai Lu <yinghai@thinkingmachines.ai>
Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
Signed-off-by: liusiqian <liusiqian@tal.com>
Signed-off-by: Pavani Majety <pmajety@nvidia.com>
Signed-off-by: Ye (Charlotte) Qi <yeq@meta.com>
Signed-off-by: Tianyu Guo <guoty9@mail2.sysu.edu.cn>
Signed-off-by: Xiongfei Wei <isaacwxf23@gmail.com>
Signed-off-by: wangli <wangli858794774@gmail.com>
Signed-off-by: Anna Pendleton <pendleton@google.com>
Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
Signed-off-by: Yunqiu Guo <guorachel@meta.com>
Signed-off-by: Amog Kamsetty <amogkamsetty@gmail.com>
Co-authored-by: Hongxia Yang <62075498+hongxiayang@users.noreply.github.com>
Co-authored-by: tjtanaa <tunjian.tan@embeddedllm.com>
Co-authored-by: Maximilien de Bayser <mbayser@br.ibm.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
Co-authored-by: Richard Zou <zou3519@users.noreply.github.com>
Co-authored-by: Chengji Yao <chengjiyao@google.com>
Co-authored-by: aws-elaineyz <elaineyz@amazon.com>
Co-authored-by: Yikun Jiang <yikunkero@gmail.com>
Co-authored-by: Brent Salisbury <bsalisbu@redhat.com>
Co-authored-by: Satyajith Chilappagari <satchill@amazon.com>
Co-authored-by: Jee Jee Li <pandaleefree@gmail.com>
Co-authored-by: Michael Yao <haifeng.yao@daocloud.io>
Co-authored-by: Reid <61492567+reidliu41@users.noreply.github.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: Nicolò Lucchesi <nlucches@redhat.com>
Co-authored-by: Lukas Geiger <lukas.geiger94@gmail.com>
Co-authored-by: Luka Govedič <ProExpertProg@users.noreply.github.com>
Co-authored-by: Chenyaaang <42742451+Chenyaaang@users.noreply.github.com>
Co-authored-by: Duyi-Wang <duyi.wang@intel.com>
Co-authored-by: Hyogeun Oh (오효근) <ohg3417@gmail.com>
Co-authored-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Co-authored-by: Gregory Shtrasberg <156009573+gshtras@users.noreply.github.com>
Co-authored-by: Cyrus Leung <tlleungac@connect.ust.hk>
Co-authored-by: CYJiang <86391540+googs1025@users.noreply.github.com>
Co-authored-by: Nick Hill <nhill@redhat.com>
Co-authored-by: Will Eaton <weaton@redhat.com>
Co-authored-by: Will Eaton <wseaton@users.noreply.github.com>
Co-authored-by: Michael Goin <mgoin64@gmail.com>
Co-authored-by: Carol Zheng <cazheng@google.com>
Co-authored-by: Wenhua Cheng <wenhua.cheng@intel.com>
Co-authored-by: Chauncey <chaunceyjiang@gmail.com>
Co-authored-by: iLeGend <youzhi.jin@intel.com>
Co-authored-by: H <linhaibin.eric@gmail.com>
Co-authored-by: vllmellm <vllm.ellm@embeddedllm.com>
Co-authored-by: Rabi Mishra <ramishra@redhat.com>
Co-authored-by: Always-Naive <97138029+Always-Naive@users.noreply.github.com>
Co-authored-by: Daniele <36171005+dtrifiro@users.noreply.github.com>
Co-authored-by: Shawn Huang <57223022+huangyuxiang03@users.noreply.github.com>
Co-authored-by: huangyuxiang03 <huangyx0321@gmail.com>
Co-authored-by: Russell Bryant <rbryant@redhat.com>
Co-authored-by: rongfu.leng <rongfu.leng@daocloud.io>
Co-authored-by: Yu Guo <82124926+yuguo68@users.noreply.github.com>
Co-authored-by: Pooya Davoodi <pooya.davoodi@parasail.io>
Co-authored-by: Yong Hoon Shin <48474650+sarckk@users.noreply.github.com>
Co-authored-by: Lucia Fang <116399278+luccafong@users.noreply.github.com>
Co-authored-by: Lucia (Lu) Fang <fanglu@meta.com>
Co-authored-by: Ashraf Mahgoub <ashymahg@amazon.com>
Co-authored-by: Rohith Nallamaddi <nalrohit@amazon.com>
Co-authored-by: FeliciaLuo <luof@amazon.com>
Co-authored-by: Fred Reiss <frreiss@us.ibm.com>
Co-authored-by: Charlie Fu <charlifu@amd.com>
Co-authored-by: ptarasiewiczNV <104908264+ptarasiewiczNV@users.noreply.github.com>
Co-authored-by: Yizhou Liu <liu_yizhou@outlook.com>
Co-authored-by: Ekagra Ranjan <3116519+ekagra-ranjan@users.noreply.github.com>
Co-authored-by: Benjamin Chislett <benjamin.chislett@centml.ai>
Co-authored-by: zhrrr <43847754+izhuhaoran@users.noreply.github.com>
Co-authored-by: zhuhaoran <zhuhaoran.zhr@alibaba-inc.com>
Co-authored-by: Tyler Michael Smith <tysmith@redhat.com>
Co-authored-by: Robert Shaw <114415538+robertgshaw2-redhat@users.noreply.github.com>
Co-authored-by: 22quinn <33176974+22quinn@users.noreply.github.com>
Co-authored-by: jennyyyyzhen <47012288+jennyyyyzhen@users.noreply.github.com>
Co-authored-by: yZhen <yZhen@fb.com>
Co-authored-by: yzhen <yzhen@devgpu093.cco2.facebook.com>
Co-authored-by: Frαnçois <francois.paupier@gmail.com>
Co-authored-by: Calvin Chen <45745657+calvin0327@users.noreply.github.com>
Co-authored-by: Siyuan Liu <lsiyuan@google.com>
Co-authored-by: Hossein Sarshar <hossein.sarshar@gmail.com>
Co-authored-by: Tyler Michael Smith <tyler@neuralmagic.com>
Co-authored-by: Concurrensee <yidawu@alumni.cmu.edu>
Co-authored-by: Li, Jiang <jiang1.li@intel.com>
Co-authored-by: Rui Qiao <161574667+ruisearch42@users.noreply.github.com>
Co-authored-by: 汪志鹏 <wangzhipeng628@gmail.com>
Co-authored-by: Lu Fang <30275821+houseroad@users.noreply.github.com>
Co-authored-by: Chen Zhang <zhangch99@outlook.com>
Co-authored-by: youkaichao <youkaichao@gmail.com>
Co-authored-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Co-authored-by: Raushan Turganbay <raushan.turganbay@alumni.nu.edu.kz>
Co-authored-by: Simon Mo <simon.mo@hey.com>
Co-authored-by: SorenDreano <71752785+SorenDreano@users.noreply.github.com>
Co-authored-by: Soren Dreano <soren@numind.ai>
Co-authored-by: Varun Sundar Rabindranath <varunsundar08@gmail.com>
Co-authored-by: Varun Sundar Rabindranath <vsundarr@redhat.com>
Co-authored-by: Yikun Jiang <yikun@apache.org>
Co-authored-by: Yan Ru Pei <yanrpei@gmail.com>
Co-authored-by: Jiaxin Shan <seedjeffwan@gmail.com>
Co-authored-by: Mark McLoughlin <markmc@redhat.com>
Co-authored-by: Vadim Gimpelson <156319763+vadiklyutiy@users.noreply.github.com>
Co-authored-by: Kaixi Hou <kaixih@nvidia.com>
Co-authored-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Co-authored-by: Seiji Eicher <58963096+eicherseiji@users.noreply.github.com>
Co-authored-by: wang.yuqi <noooop@126.com>
Co-authored-by: Xu Wenqing <121550081+Xu-Wenqing@users.noreply.github.com>
Co-authored-by: Lain <fusiyuan2000@hotmail.com>
Co-authored-by: jmswen <jmswen@users.noreply.github.com>
Co-authored-by: Kebe <mail@kebe7jun.com>
Co-authored-by: Yang Wang <elainewy@meta.com>
Co-authored-by: Huy Do <huydhn@gmail.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Guillaume Calmettes <gcalmettes@scaleway.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Chiyue Wei <92623189+dubcyfor3@users.noreply.github.com>
Co-authored-by: Chiyue Wei <chiyuew@nvidia.com>
Co-authored-by: Povilas Kanapickas <povilas@radix.lt>
Co-authored-by: Dipika Sikka <dipikasikka1@gmail.com>
Co-authored-by: Luis Vega <vegaluisjose@users.noreply.github.com>
Co-authored-by: Luis Vega <2478335+vegaluisjose@users.noreply.github.com>
Co-authored-by: Jerry Zhang <jerryzh168@gmail.com>
Co-authored-by: Xu Song <xusong.vip@gmail.com>
Co-authored-by: Aaron Pham <contact@aarnphm.xyz>
Co-authored-by: Jinghui Zhang <jinghuizhang0804@gmail.com>
Co-authored-by: jinghui <jinghui@fb.com>
Co-authored-by: Siqi Yan <ysq0807@hotmail.com>
Co-authored-by: Siqi Yan <siqi@meta.com>
Co-authored-by: Nishidha <nishidha.panpaliya@partner.ibm.com>
Co-authored-by: Md. Shafi Hussain <Md.Shafi.Hussain@ibm.com>
Co-authored-by: Adolfo Victoria <adolfokarim@gmail.com>
Co-authored-by: Adolfo Victoria <adovi@meta.com>
Co-authored-by: Alexei-V-Ivanov-AMD <156011006+Alexei-V-Ivanov-AMD@users.noreply.github.com>
Co-authored-by: ElizaWszola <ewszola@redhat.com>
Co-authored-by: QiliangCui <derrhein@gmail.com>
Co-authored-by: Aaruni Aggarwal <47731267+AaruniAggarwal@users.noreply.github.com>
Co-authored-by: Driss Guessous <32754868+drisspg@users.noreply.github.com>
Co-authored-by: Lifans <draftbks@gmail.com>
Co-authored-by: pramenku <7664080+pramenku@users.noreply.github.com>
Co-authored-by: Akash kaothalkar <61960177+Akashcodes732@users.noreply.github.com>
Co-authored-by: Akash Kaothalkar <akash.kaothalkar@ibm.com>
Co-authored-by: Kseniya Parkhamchuk <43078183+KsuParkhamchuk@users.noreply.github.com>
Co-authored-by: Se7en <chengzw258@163.com>
Co-authored-by: Conroy Cheers <conroy@corncheese.org>
Co-authored-by: Yinghai Lu <yinghai@thinkingmachines.ai>
Co-authored-by: Kyle Sayers <kylesayrs@gmail.com>
Co-authored-by: liusiqian-tal <141730978+liusiqian-tal@users.noreply.github.com>
Co-authored-by: Pavani Majety <pmajety@nvidia.com>
Co-authored-by: Ye (Charlotte) Qi <yeq@meta.com>
Co-authored-by: Tianyu Guo <guoty9@mail2.sysu.edu.cn>
Co-authored-by: XiongfeiWei <isaacwxf23@gmail.com>
Co-authored-by: Li Wang <wangli858794774@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Anna Pendleton <pendleton@google.com>
Co-authored-by: Louie Tsai <louie.tsai@intel.com>
Co-authored-by: Li, Jiang <bigpyj64@gmail.com>
Co-authored-by: Rachel Guo <35738743+YUNQIUGUO@users.noreply.github.com>
Co-authored-by: Isotr0py <2037008807@qq.com>
tanujtiwari1998 pushed a commit to character-tech/vllm that referenced this pull request Jul 7, 2025
…quest (#12)


Signed-off-by: Fred Reiss <frreiss@us.ibm.com>

* [ROCm][Kernel] Add gfx950 support for skinny gemms (vllm-project#18010)

Signed-off-by: charlifu <charlifu@amd.com>

* [P/D] NixlConnector use cache device index for memory registration (vllm-project#18969)

Signed-off-by: Piotr Tarasiewicz <ptarasiewicz@nvidia.com>

* [BugFix] Fix multi-node offline data-parallel (vllm-project#18981)

Signed-off-by: Nick Hill <nhill@redhat.com>
Co-authored-by: Yizhou Liu <liu_yizhou@outlook.com>

* [Misc] add return token strs for tokenize (vllm-project#18941)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Misc][Benchmark] Add support for CustomDataset (vllm-project#18511)

* [Bugfix] Fix EAGLE3 broken logits (vllm-project#18909)

Signed-off-by: Benjamin Chislett <benjamin.chislett@centml.ai>

* [Core] Rework dtype resolution (vllm-project#18751)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [LoRA] Support dynamically initialize `packed_modules_mapping` for VLM with arbitrary components (vllm-project#18987)

Signed-off-by: isotr0py <2037008807@qq.com>
Signed-off-by: Isotr0py <2037008807@qq.com>

* [doc] small fix -  mkdocs (vllm-project#18996)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* Let max_num_batched_tokens use human_readable_int for large numbers (vllm-project#18968)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [BugFix] fix data parallel construction of ipv6 url address (vllm-project#18991)

Signed-off-by: rongfu.leng <rongfu.leng@daocloud.io>

* [BugFix] Fix incorrect metrics shutdown error log message (vllm-project#18992)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [doc] wrong output (vllm-project#19000)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Misc] reuse num_tokens_across_dp of get_dp_padding to avoid unnecessary dp all reduce in set_forward_context (vllm-project#18935)

Signed-off-by: Tyler Michael Smith <tysmith@redhat.com>
Co-authored-by: zhuhaoran <zhuhaoran.zhr@alibaba-inc.com>
Co-authored-by: Tyler Michael Smith <tysmith@redhat.com>

* [Bugfix][Nixl] Fix DP Metadata Handshake (vllm-project#19008)

Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com>

* [Core] Support inplace model weights loading (vllm-project#18745)

Signed-off-by: 22quinn <33176974+22quinn@users.noreply.github.com>

* [doc] add pytest tips (vllm-project#19010)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Model] enable data parallel for Llama4 vision encoder (vllm-project#18368)

Signed-off-by: yzhen <yzhen@devgpu093.cco2.facebook.com>
Co-authored-by: yZhen <yZhen@fb.com>
Co-authored-by: yzhen <yzhen@devgpu093.cco2.facebook.com>

* [Frontend] enable custom logging for the uvicorn server (OpenAI API server) (vllm-project#18403)

Signed-off-by: François Paupier <francois.paupier@gmail.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>

* [Bugfix][Model] Attempt to fix eagle in V0. (vllm-project#18978)

Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>

* add an absolute path for run.sh (vllm-project#18258)

Signed-off-by: calvin chen <120380290@qq.com>

* [Hardware][TPU] Initial support of model parallelism with single worker using SPMD (vllm-project#18011)

Signed-off-by: Siyuan Liu <lsiyuan@google.com>
Co-authored-by: Hossein Sarshar <hossein.sarshar@gmail.com>
Co-authored-by: Chengji Yao <chengjiyao@google.com>

* [Doc] Remove duplicate TOCs during MkDocs migration (vllm-project#19021)

Signed-off-by: Zerohertz <ohg3417@gmail.com>

* [Bugfix][EP+DP] Use pplx-kernel internode instead of intranode (vllm-project#19034)

Signed-off-by: Tyler Michael Smith <tysmith@redhat.com>
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>

* Adding "LoRA Test %N" to AMD production tests (vllm-project#18929)

Signed-off-by: Yida Wu <yidawu@alumni.cmu.edu>

* [CPU][CI] Re-enable the CPU CI tests (vllm-project#19046)

Signed-off-by: jiang.li <jiang1.li@intel.com>

* [ROCm][Build] Clean up the ROCm build (vllm-project#19040)

Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>

* [V1] Support DP with Ray (vllm-project#18779)

* Add tarsier model support (vllm-project#18985)

Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com>

* [bugfix] small fix logic issue (vllm-project#18999)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* Reduce logs in CLI scripts and plugin loader (vllm-project#18970)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [Bugfix] Use cmake 3.26.1 instead of 3.26 to avoid build failure (vllm-project#19019)

Signed-off-by: Lu Fang <lufang@fb.com>

* [v1][KVCacheManager] Rename BlockHashType to BlockHash (vllm-project#19015)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* Update docker docs with ARM CUDA cross-compile (vllm-project#19037)

Signed-off-by: mgoin <michael@neuralmagic.com>

* [Doc] Add InternVL LoRA support  (vllm-project#19055)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* [Misc] Update `WeightsMapper` for qwen2-vl/qwen2.5-vl (vllm-project#19054)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [Doc] Update V1 user guide for embedding and enc-dec models (vllm-project#19060)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [doc] clarify windows support (vllm-project#19088)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [CI/Build] Remove V0 LoRA test (vllm-project#19066)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* Fix underscores in dict keys passed via CLI (vllm-project#19030)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* [Bugfix] disable processor cache  (vllm-project#19068)

Signed-off-by: raushan <raushan@huggingface.co>

* [Doc] Improve the Pull Request template with key components (vllm-project#19086)

Signed-off-by: Lu Fang <lufang@fb.com>

* [Misc] Add missing `_Backend` enums (vllm-project#19081)

Signed-off-by: nicklucche <nlucches@redhat.com>

* [Misc] fix: add miss best_of param validation (vllm-project#18555)

Signed-off-by: googs1025 <googs1025@gmail.com>

* [Misc] Add SPDX-FileCopyrightText  (vllm-project#19100)

Signed-off-by: simon-mo <simon.mo@hey.com>

* [Doc] Readme standardization (vllm-project#18695)

Co-authored-by: Soren Dreano <soren@numind.ai>

* [doc] update docker version (vllm-project#19074)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Kernel] DeepEP dispatch-combine kernel integration (vllm-project#18434)

Signed-off-by: Varun <vsundarr@redhat.com>
Co-authored-by: Varun Sundar Rabindranath <vsundarr@redhat.com>

* [V1] Support cross-layer KV sharing (vllm-project#18212)

Signed-off-by: Yong Hoon Shin <yhshin@meta.com>

* [Perf] Tune `scaled_fp8_quant` by increasing vectorization (vllm-project#18844)

Signed-off-by: mgoin <mgoin64@gmail.com>

* Fix interaction between `Optional` and `Annotated` in CLI typing (vllm-project#19093)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Co-authored-by: Yikun Jiang <yikun@apache.org>

* [v1] Re-init input batch for multiple kv cache groups (vllm-project#18654)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* [V1][Spec Decode][Ngram] 1.35x gain -> 1.95x gain on InstructCoder with prompt fix (vllm-project#18971)

* [Bugfix] get_num_blocks_to_allocate with null_block (vllm-project#19031)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* [Bugfix]: Fix the incompatibility issue with tool_choice 'required' when Thinking is enabled (vllm-project#19075)

Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>

* [Bugfix][P/D] Fix Prefix Cache Bug (vllm-project#18411)

Signed-off-by: nicklucche <nlucches@redhat.com>
Co-authored-by: Robert Shaw <114415538+robertgshaw2-redhat@users.noreply.github.com>

* [Bugfix] Max concurrency estimation and check_enough_kv_cache_memory for models with sliding window layers (vllm-project#19029)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* feat: add data parallel rank to KVEventBatch (vllm-project#18925)

* [Misc] Fix path and python alias errors in disagg_prefill examples (vllm-project#18919)

* [Docs] Add developer doc about CI failures (vllm-project#18782)

Signed-off-by: Russell Bryant <rbryant@redhat.com>
Co-authored-by: Mark McLoughlin <markmc@redhat.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>

* [CPU] V1 support for the CPU backend (vllm-project#16441)

* [Core] Cast multimodal input in hf processor (vllm-project#18862)

Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com>

* [KERNEL] Sampler. CUDA kernel for applying repetition penalty (vllm-project#18437)

* [Cleanup][v1]: remove guided-decoding-backend for example (vllm-project#19059)

Signed-off-by: calvin chen <120380290@qq.com>

* [NVIDIA] Add Cutlass MLA backend (vllm-project#17625)

* [Bugfix] Fix FA3 full cuda graph correctness (vllm-project#19106)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* Fix vllm-project#19130 (vllm-project#19132)

Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com>

* [TPU] Skip hanging tests (vllm-project#19115)

Signed-off-by: Siyuan Liu <lsiyuan@google.com>

* Fix ValueError: Missing value for tag key(s): model_name,engine. (vllm-project#19113)

Signed-off-by: Seiji Eicher <seiji@anyscale.com>

* [Misc] Add packages for benchmark as extra dependency (vllm-project#19089)

Signed-off-by: Isotr0py <2037008807@qq.com>

* Improve the output precision of embedding models (vllm-project#19092)

* [CI/Build][Bugfix] Ensure compatibility with transformers 4.52 (vllm-project#18678)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* Add DeepSeek-R1-0528 function call chat template (vllm-project#18874)

Signed-off-by: 许文卿 <xwq391974@alibaba-inc.com>

* Sm100 blockwise fp8 swap ab (vllm-project#18564)

* [Doc] Update V1 Guide for embedding models (vllm-project#19141)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* Allow AsyncLLMEngine.generate to target a specific DP rank (vllm-project#19102)

Signed-off-by: Jon Swenson <jmswen@gmail.com>

* [Bugfix][EP+DP] Fix internode check (vllm-project#19112)

Signed-off-by: Tyler Michael Smith <tysmith@redhat.com>

* [Perf] Tunings for SM100 FP8 CUTLASS kernel (vllm-project#18778)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [TPU] Update dynamo dump file name in compilation test (vllm-project#19108)

Signed-off-by: Siyuan Liu <lsiyuan@google.com>

* [Bugfix] fix v1 cpu worker fails on macOS (vllm-project#19121)

* [Kernel] Integrate batched/masked deepgemm kernel (vllm-project#19111)

Signed-off-by: Varun <vsundarr@redhat.com>
Co-authored-by: Varun <vsundarr@redhat.com>

* [Misc] refactor: simplify EngineCoreClient.make_async_mp_client in AsyncLLM (vllm-project#18817)

Signed-off-by: googs1025 <googs1025@gmail.com>

* [P/D] Heterogeneous TP (vllm-project#18833)

Signed-off-by: nicklucche <nlucches@redhat.com>

* [doc] small fix (vllm-project#19167)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Bugfix][Nixl] Fix full prefix cache hit bug (vllm-project#18632)

Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com>
Signed-off-by: Nick Hill <nhill@redhat.com>
Co-authored-by: Nick Hill <nhill@redhat.com>

* [Bugfix] Fix port handling in make_zmq_path (vllm-project#19117)

* [Torch Nightly] add missing dependency (vllm-project#18770)

Signed-off-by: Yang Wang <elainewy@meta.com>

* Handle non-serializable objects when dumping benchmark results (vllm-project#19114)

* [BugFix][Minor] Fix full cuda graph bug when max_num_seqs < 512 (vllm-project#19171)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* [Bugfix]: Fix the incompatibility issue with stream when Thinking is disabled (vllm-project#19135)

Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>

* [Build] Annotate wheel and container path for release workflow (vllm-project#19162)

Signed-off-by: simon-mo <simon.mo@hey.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* [Misc] Remove unnecessary fallback to prefill-decode attention (vllm-project#19138)

Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>

* [Misc] Do not override NCCL_CUMEM_ENABLE if set explicitly (vllm-project#19105)

Signed-off-by: 22quinn <33176974+22quinn@users.noreply.github.com>

* [Frontend] improve vllm run-batch --help display (vllm-project#19187)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Bugfix] properly catch PIL-related errors for vision models when incorrect data urls are provided (vllm-project#19202)

Signed-off-by: Guillaume Calmettes <gcalmettes@scaleway.com>

* [mistral_common] Add v11 tokenizer (vllm-project#19193)

Signed-off-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Add H20-3e fused MoE kernel tuning configs for DeepSeek-R1/V3 (vllm-project#19205)

* [Hardware][NVIDIA] FP4 MoE kernel optimization (vllm-project#19110)

Signed-off-by: Chiyue Wei <chiyuew@nvidia.com>
Co-authored-by: Chiyue Wei <chiyuew@nvidia.com>

* [MISC][Bugfix] Use less CPU when message queue has been empty for some time (vllm-project#16226)

Signed-off-by: Povilas Kanapickas <povilas@radix.lt>

* [P/D][NixlConnector] Enable FlashInfer backend (vllm-project#19090)

* [Quantization] Skip Fp4 Test for `compressed-tensors` (vllm-project#19217)

* [V1] Use FlashInfer by default on Blackwell GPUs (vllm-project#19118)

* [Model] NemotronH support (vllm-project#18863)

Signed-off-by: Luis Vega <2478335+vegaluisjose@users.noreply.github.com>
Co-authored-by: Luis Vega <2478335+vegaluisjose@users.noreply.github.com>

* Fix AOPerModuleConfig name changes (vllm-project#18869)

Signed-off-by: Jerry Zhang <jerryzh168@gmail.com>

* [Bugfix] Fix EAGLE vocab embedding construction for Llama 70B (vllm-project#19033)

Signed-off-by: Benjamin Chislett <benjamin.chislett@centml.ai>

* [v1] Hybrid Memory Allocator (vllm-project#17996)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* [TPU] update torch_xla pin (vllm-project#19231)

Signed-off-by: Chengji Yao <chengjiyao@google.com>

* Support allowed_token_ids in ChatCompletionRequest (vllm-project#19143)

Signed-off-by: Xu Song <xusong.vip@gmail.com>

* [Chore] update CODEOWNERS (vllm-project#19247)

Signed-off-by: Aaron Pham <contact@aarnphm.xyz>

* [v1][P/D] Fix an edge case in kv cache schedule (vllm-project#19182)

Co-authored-by: jinghui <jinghui@fb.com>

* [TPU] fix kv cache dtype in model runner (vllm-project#19244)

Signed-off-by: Chengji Yao <chengjiyao@google.com>

* [Quantization] Bump compressed-tensors version; update NVFP4A16 test model (vllm-project#19224)

Signed-off-by: Dipika Sikka <dipikasikka1@gmail.com>

* [Docs] Improve V1 KVConnector interface documentation (vllm-project#19172)

Signed-off-by: Nick Hill <nhill@redhat.com>

* Fix CompilationConfig repr (vllm-project#19091)

Signed-off-by: rzou <zou3519@gmail.com>

* Unit Test for run_dp_sharded_vision_model (vllm-project#19103)

Signed-off-by: Siqi Yan <siqi@meta.com>
Co-authored-by: Siqi Yan <siqi@meta.com>

* [Model] Optimize nemotron_h implementation (vllm-project#19249)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* [Core] Raise when non-multi-instance DP clients target a DP rank (vllm-project#19227)

Signed-off-by: Jon Swenson <jmswen@gmail.com>

* improve logits bias (vllm-project#19041)

* Fixed ppc build when it runs on non-RHEL based linux distros (vllm-project#18422)

Signed-off-by: Nishidha Panpaliya <nishidha.panpaliya@partner.ibm.com>
Signed-off-by: Md. Shafi Hussain <Md.Shafi.Hussain@ibm.com>
Signed-off-by: npanpaliya <nishidha.panpaliya@partner.ibm.com>
Co-authored-by: Md. Shafi Hussain <Md.Shafi.Hussain@ibm.com>

* [BugFix] Fix MultiConnector test after HMA changes (vllm-project#19291)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [Bugfix][Core] Update cancellation logic in `generate()` to handle Generator exits (vllm-project#19225)

Co-authored-by: Adolfo Victoria <adovi@meta.com>

* [Core] Fix abrupt request abort (vllm-project#18485)

Signed-off-by: nicklucche <nlucches@redhat.com>
Signed-off-by: Nick Hill <nhill@redhat.com>

Co-authored-by: Nick Hill <nhill@redhat.com>

* [BugFix] Fix tpu_model_runner block_id concatenation (vllm-project#19228)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [Misc][Tools][Benchmark] Fix and improve auto tune script (vllm-project#19163)

Signed-off-by: Chenyaaang <chenyangli@google.com>

* [Build][ROCm] Update Dockerfile.rocm (vllm-project#19296)

Signed-off-by: Alexei V. Ivanov <alexei.ivanov@amd.com>

* [Easy][Test] Simplify test_function_tool_use with multiple parametrizes (vllm-project#19269)

Signed-off-by: Lu Fang <lufang@fb.com>

* [Kernel] Integrate CUTLASS MoE kernel with PPLX (vllm-project#18762)

Signed-off-by: ElizaWszola <ewszola@redhat.com>
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
Co-authored-by: Tyler Michael Smith <tyler@neuralmagic.com>

* [TPU][Test] Add script to run benchmark on TPU for buildkite (vllm-project#19039)

Signed-off-by: Qiliang Cui <derrhein@gmail.com>

* [CI][PowerPC] Use a more appropriate way to select testcase in tests/models/language/pooling/test_embedding.py (vllm-project#19253)

Signed-off-by: Aaruni Aggarwal <aaruniagg@gmail.com>

* Add FlexAttention to V1 (vllm-project#16078)

Signed-off-by: drisspg <drisspguessous@gmail.com>

* [Misc] refactor context extension (vllm-project#19246)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [CI/Build] Improve Llama GGUF test robustness (vllm-project#19287)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [Nit][Benchmark]Fix example in benchmark_serving_structured_output.py (vllm-project#19311)

Signed-off-by: Lifan Shen <lifans@meta.com>

* [AMD] Update compatible packaging version (vllm-project#19309)

Signed-off-by: pramkuma <Pramendra.Kumar@amd.com>

* [BugFix][V1] Fix memory profiling bug (vllm-project#18974)

Signed-off-by: luka <luka@neuralmagic.com>

* [Bugfix]: Fix TypeError: 'float' object cannot be interpreted as an integer (vllm-project#19283)

Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>

* [Bugfix] Re-enable use_cudagraph in vLLM v1 (vllm-project#19299)

Signed-off-by: Richard Zou <zou3519@gmail.com>

* [Misc] Change tests/compile to use VLLM_V1 by default (vllm-project#19302)

Signed-off-by: rzou <zou3519@gmail.com>

* Add H20-3e fused MoE kernel tuning configs for Qwen3-235B-A22B (vllm-project#19315)

Signed-off-by: Xu Wenqing <xuwq1993@qq.com>

* [Hardware][POWER] Add IBM POWER11 Support to CPU Extension Detection (vllm-project#19082)

Signed-off-by: Akash Kaothalkar <akash.kaothalkar@ibm.com>
Co-authored-by: Akash Kaothalkar <akash.kaothalkar@ibm.com>

* [Quantization] Add compressed-tensors NVFP4 support (vllm-project#18312)

* [Multi Modal] Add an env var for message queue max chunk bytes  (vllm-project#19242)

Signed-off-by: yZhen <yZhen@fb.com>
Co-authored-by: yZhen <yZhen@fb.com>

* [Bugfix] model_max_length should consider max_model_len in tokenizer_config (vllm-project#19201)

* [Deprecation] Remove `inputs` arg fallback in Engine classes (vllm-project#18799)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Misc] Add documentation update reminder to PR template (vllm-project#19289)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [Frontend] Remove unreachable code from llm.py (vllm-project#19288)

Signed-off-by: KsuParkhamchuk <k.parkhamchuk@gmail.com>

* [Misc] Cleanup compilation tests (vllm-project#19343)

Signed-off-by: rzou <zou3519@gmail.com>

* [doc] improve ci doc (vllm-project#19307)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Doc] Fix description in the Automatic Prefix Caching design doc (vllm-project#19333)

Signed-off-by: cr7258 <chengzw258@163.com>

* [CI/Build] Fix LoRA test (vllm-project#19350)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* [Fix] Allow kernel compilation for CUDA capability 8.7 (vllm-project#19328)

Signed-off-by: Conroy Cheers <conroy@corncheese.org>

* [CI] Introduce rules for llama auto-label (vllm-project#19323)

Signed-off-by: Lu Fang <lufang@fb.com>

* [Docs] Fix a bullet list in usage/security.md (vllm-project#19358)

Signed-off-by: windsonsea <haifeng.yao@daocloud.io>

* [full_graph] Fix query_start_loc padding (vllm-project#19321)

Signed-off-by: Yinghai Lu <yinghai@thinkingmachines.ai>

* [v1] Add fp32 support to v1 engine through flex attn (vllm-project#19319)

Signed-off-by: Isotr0py <2037008807@qq.com>
Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>

* [Misc] Fixes and Optimizations for DeepEP + DeepGEMM combination. (vllm-project#19298)

Signed-off-by: Varun <vsundarr@redhat.com>
Co-authored-by: Varun <vsundarr@redhat.com>

* [Bugfix][Core] Prevent token lengths exceeding `max_model_len` in V0 (vllm-project#19348)

Signed-off-by: 22quinn <33176974+22quinn@users.noreply.github.com>

* [Quantization] Bump compressed-tensors version (vllm-project#19295)

Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>

* [Frontend] Make TIMEOUT_KEEP_ALIVE configurable through env var (vllm-project#18472)

Signed-off-by: liusiqian <liusiqian@tal.com>

* [TPU] Fix KV cache sharing tests (vllm-project#19371)

* [HOT-FIX] Add `kv_sharing_target_layer_name` argument to cutlass_mla backend (vllm-project#19374)

Signed-off-by: Pavani Majety <pmajety@nvidia.com>

* [Misc] Fix a config typo in disable_hybrid_kv_cache_manager configuration (vllm-project#19383)

Signed-off-by: Siyuan Liu <lsiyuan@google.com>

* [V1] Reuse V0's memory_profiling util for gpu worker memory profiling (vllm-project#19312)

Signed-off-by: Ye (Charlotte) Qi <yeq@meta.com>

* [Bugfix] Fix benchmark_moe.py (vllm-project#19016)

Signed-off-by: Tianyu Guo <guoty9@mail2.sysu.edu.cn>

* Use xla flag to improve the quantized model performance (vllm-project#19303)

Signed-off-by: Xiongfei Wei <isaacwxf23@gmail.com>

* Fix docs/mkdocs/hooks/remove_announcement.py (vllm-project#19382)

* [Frontend] Add tqdm_leave_pbar to control progress bar visibility (vllm-project#19357)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Core] Use tuple for kv cache group block ids (vllm-project#19175)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [Bugfix] Fix modelscope token passed in (vllm-project#19389)

Signed-off-by: wangli <wangli858794774@gmail.com>
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
Co-authored-by: Jee Jee Li <pandaleefree@gmail.com>

* [Core] Batch multi modal input using pinned memory (vllm-project#19169)

Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com>

* Add security warning to bug report template (vllm-project#19365)

Signed-off-by: Russell Bryant <rbryant@redhat.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* [Misc] refactor neuron_multimodal and profiling (vllm-project#19397)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* Add clear documentation around the impact of debugging flag (vllm-project#19369)

Signed-off-by: Anna Pendleton <pendleton@google.com>

* Automatically bind CPU OMP Threads of a rank to CPU ids of a NUMA node. (vllm-project#17930)

Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
Co-authored-by: Li, Jiang <bigpyj64@gmail.com>

* Revert "[v1] Add fp32 support to v1 engine through flex attn" (vllm-project#19404)

* [BugFix][FlashInfer] Fix attention backend interface mismatch with unexpected keyword `use_irope` (vllm-project#19134)

Signed-off-by: Yunqiu Guo <guorachel@meta.com>

* [BugFix][CPU] Fix CPU CI by ignore collecting test_pixtral (vllm-project#19411)

Signed-off-by: jiang.li <jiang1.li@intel.com>

* Simplify ep kernels installation (vllm-project#19412)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [Misc] Slight improvement of the BNB (vllm-project#19418)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
Co-authored-by: Isotr0py <2037008807@qq.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* fix

Signed-off-by: Amog Kamsetty <amogkamsetty@gmail.com>

* fix

Signed-off-by: Amog Kamsetty <amogkamsetty@gmail.com>

* update config

Signed-off-by: Amog Kamsetty <amogkamsetty@gmail.com>

* add

Signed-off-by: Amog Kamsetty <amogkamsetty@gmail.com>

---------

Signed-off-by: Hongxia Yang <hongxia.yang@amd.com>
Signed-off-by: tjtanaa <tunjian.tan@embeddedllm.com>
Signed-off-by: Max de Bayser <mbayser@br.ibm.com>
Signed-off-by: rzou <zou3519@gmail.com>
Signed-off-by: Chengji Yao <chengjiyao@google.com>
Signed-off-by: Elaine Zhao <elaineyz@amazon.com>
Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
Signed-off-by: Brent Salisbury <bsalisbu@redhat.com>
Signed-off-by: Satyajith Chilappagari <satchill@amazon.com>
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
Signed-off-by: windsonsea <haifeng.yao@daocloud.io>
Signed-off-by: reidliu41 <reid201711@gmail.com>
Signed-off-by: NickLucche <nlucches@redhat.com>
Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com>
Signed-off-by: luka <luka@neuralmagic.com>
Signed-off-by: Chenyaaang <chenyangli@google.com>
Signed-off-by: Duyi-Wang <duyi.wang@intel.com>
Signed-off-by: Zerohertz <ohg3417@gmail.com>
Signed-off-by: Isotr0py <2037008807@qq.com>
Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Signed-off-by: googs1025 <googs1025@gmail.com>
Signed-off-by: nicklucche <nlucches@redhat.com>
Signed-off-by: Nick Hill <nhill@redhat.com>
Signed-off-by: Will Eaton <weaton@redhat.com>
Signed-off-by: mgoin <mgoin64@gmail.com>
Signed-off-by: Carol Zheng <cazheng@google.com>
Signed-off-by: wenhuach21 <wenhua.cheng@intel.com>
Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>
Signed-off-by: iLeGend <824040212@qq.com>
Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>
Signed-off-by: rabi <ramishra@redhat.com>
Signed-off-by: Daniele Trifirò <dtrifiro@redhat.com>
Signed-off-by: huangyuxiang03 <huangyx0321@gmail.com>
Signed-off-by: Russell Bryant <rbryant@redhat.com>
Signed-off-by: isotr0py <2037008807@qq.com>
Signed-off-by: rongfu.leng <rongfu.leng@daocloud.io>
Signed-off-by: Pooya Davoodi <pooya.davoodi@parasail.io>
Signed-off-by: Yong Hoon Shin <yhshin@meta.com>
Signed-off-by: Lu Fang <fanglu@fb.com>
Signed-off-by: Fred Reiss <frreiss@us.ibm.com>
Signed-off-by: charlifu <charlifu@amd.com>
Signed-off-by: Piotr Tarasiewicz <ptarasiewicz@nvidia.com>
Signed-off-by: Benjamin Chislett <benjamin.chislett@centml.ai>
Signed-off-by: Tyler Michael Smith <tysmith@redhat.com>
Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com>
Signed-off-by: 22quinn <33176974+22quinn@users.noreply.github.com>
Signed-off-by: yzhen <yzhen@devgpu093.cco2.facebook.com>
Signed-off-by: François Paupier <francois.paupier@gmail.com>
Signed-off-by: calvin chen <120380290@qq.com>
Signed-off-by: Siyuan Liu <lsiyuan@google.com>
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
Signed-off-by: Yida Wu <yidawu@alumni.cmu.edu>
Signed-off-by: jiang.li <jiang1.li@intel.com>
Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com>
Signed-off-by: Lu Fang <lufang@fb.com>
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
Signed-off-by: mgoin <michael@neuralmagic.com>
Signed-off-by: youkaichao <youkaichao@gmail.com>
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Signed-off-by: raushan <raushan@huggingface.co>
Signed-off-by: simon-mo <simon.mo@hey.com>
Signed-off-by: Varun <vsundarr@redhat.com>
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Signed-off-by: Seiji Eicher <seiji@anyscale.com>
Signed-off-by: 许文卿 <xwq391974@alibaba-inc.com>
Signed-off-by: Jon Swenson <jmswen@gmail.com>
Signed-off-by: Yang Wang <elainewy@meta.com>
Signed-off-by: Guillaume Calmettes <gcalmettes@scaleway.com>
Signed-off-by: Patrick von Platen <patrick.v.platen@gmail.com>
Signed-off-by: Chiyue Wei <chiyuew@nvidia.com>
Signed-off-by: Povilas Kanapickas <povilas@radix.lt>
Signed-off-by: Luis Vega <2478335+vegaluisjose@users.noreply.github.com>
Signed-off-by: Jerry Zhang <jerryzh168@gmail.com>
Signed-off-by: Xu Song <xusong.vip@gmail.com>
Signed-off-by: Aaron Pham <contact@aarnphm.xyz>
Signed-off-by: Dipika Sikka <dipikasikka1@gmail.com>
Signed-off-by: Siqi Yan <siqi@meta.com>
Signed-off-by: Nishidha Panpaliya <nishidha.panpaliya@partner.ibm.com>
Signed-off-by: Md. Shafi Hussain <Md.Shafi.Hussain@ibm.com>
Signed-off-by: npanpaliya <nishidha.panpaliya@partner.ibm.com>
Signed-off-by: Alexei V. Ivanov <alexei.ivanov@amd.com>
Signed-off-by: ElizaWszola <ewszola@redhat.com>
Signed-off-by: Qiliang Cui <derrhein@gmail.com>
Signed-off-by: Aaruni Aggarwal <aaruniagg@gmail.com>
Signed-off-by: drisspg <drisspguessous@gmail.com>
Signed-off-by: Lifan Shen <lifans@meta.com>
Signed-off-by: pramkuma <Pramendra.Kumar@amd.com>
Signed-off-by: Richard Zou <zou3519@gmail.com>
Signed-off-by: Xu Wenqing <xuwq1993@qq.com>
Signed-off-by: Akash Kaothalkar <akash.kaothalkar@ibm.com>
Signed-off-by: yZhen <yZhen@fb.com>
Signed-off-by: KsuParkhamchuk <k.parkhamchuk@gmail.com>
Signed-off-by: cr7258 <chengzw258@163.com>
Signed-off-by: Conroy Cheers <conroy@corncheese.org>
Signed-off-by: Yinghai Lu <yinghai@thinkingmachines.ai>
Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
Signed-off-by: liusiqian <liusiqian@tal.com>
Signed-off-by: Pavani Majety <pmajety@nvidia.com>
Signed-off-by: Ye (Charlotte) Qi <yeq@meta.com>
Signed-off-by: Tianyu Guo <guoty9@mail2.sysu.edu.cn>
Signed-off-by: Xiongfei Wei <isaacwxf23@gmail.com>
Signed-off-by: wangli <wangli858794774@gmail.com>
Signed-off-by: Anna Pendleton <pendleton@google.com>
Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
Signed-off-by: Yunqiu Guo <guorachel@meta.com>
Signed-off-by: Amog Kamsetty <amogkamsetty@gmail.com>
Co-authored-by: Hongxia Yang <62075498+hongxiayang@users.noreply.github.com>
Co-authored-by: tjtanaa <tunjian.tan@embeddedllm.com>
Co-authored-by: Maximilien de Bayser <mbayser@br.ibm.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
Co-authored-by: Richard Zou <zou3519@users.noreply.github.com>
Co-authored-by: Chengji Yao <chengjiyao@google.com>
Co-authored-by: aws-elaineyz <elaineyz@amazon.com>
Co-authored-by: Yikun Jiang <yikunkero@gmail.com>
Co-authored-by: Brent Salisbury <bsalisbu@redhat.com>
Co-authored-by: Satyajith Chilappagari <satchill@amazon.com>
Co-authored-by: Jee Jee Li <pandaleefree@gmail.com>
Co-authored-by: Michael Yao <haifeng.yao@daocloud.io>
Co-authored-by: Reid <61492567+reidliu41@users.noreply.github.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: Nicolò Lucchesi <nlucches@redhat.com>
Co-authored-by: Lukas Geiger <lukas.geiger94@gmail.com>
Co-authored-by: Luka Govedič <ProExpertProg@users.noreply.github.com>
Co-authored-by: Chenyaaang <42742451+Chenyaaang@users.noreply.github.com>
Co-authored-by: Duyi-Wang <duyi.wang@intel.com>
Co-authored-by: Hyogeun Oh (오효근) <ohg3417@gmail.com>
Co-authored-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Co-authored-by: Gregory Shtrasberg <156009573+gshtras@users.noreply.github.com>
Co-authored-by: Cyrus Leung <tlleungac@connect.ust.hk>
Co-authored-by: CYJiang <86391540+googs1025@users.noreply.github.com>
Co-authored-by: Nick Hill <nhill@redhat.com>
Co-authored-by: Will Eaton <weaton@redhat.com>
Co-authored-by: Will Eaton <wseaton@users.noreply.github.com>
Co-authored-by: Michael Goin <mgoin64@gmail.com>
Co-authored-by: Carol Zheng <cazheng@google.com>
Co-authored-by: Wenhua Cheng <wenhua.cheng@intel.com>
Co-authored-by: Chauncey <chaunceyjiang@gmail.com>
Co-authored-by: iLeGend <youzhi.jin@intel.com>
Co-authored-by: H <linhaibin.eric@gmail.com>
Co-authored-by: vllmellm <vllm.ellm@embeddedllm.com>
Co-authored-by: Rabi Mishra <ramishra@redhat.com>
Co-authored-by: Always-Naive <97138029+Always-Naive@users.noreply.github.com>
Co-authored-by: Daniele <36171005+dtrifiro@users.noreply.github.com>
Co-authored-by: Shawn Huang <57223022+huangyuxiang03@users.noreply.github.com>
Co-authored-by: huangyuxiang03 <huangyx0321@gmail.com>
Co-authored-by: Russell Bryant <rbryant@redhat.com>
Co-authored-by: rongfu.leng <rongfu.leng@daocloud.io>
Co-authored-by: Yu Guo <82124926+yuguo68@users.noreply.github.com>
Co-authored-by: Pooya Davoodi <pooya.davoodi@parasail.io>
Co-authored-by: Yong Hoon Shin <48474650+sarckk@users.noreply.github.com>
Co-authored-by: Lucia Fang <116399278+luccafong@users.noreply.github.com>
Co-authored-by: Lucia (Lu) Fang <fanglu@meta.com>
Co-authored-by: Ashraf Mahgoub <ashymahg@amazon.com>
Co-authored-by: Rohith Nallamaddi <nalrohit@amazon.com>
Co-authored-by: FeliciaLuo <luof@amazon.com>
Co-authored-by: Fred Reiss <frreiss@us.ibm.com>
Co-authored-by: Charlie Fu <charlifu@amd.com>
Co-authored-by: ptarasiewiczNV <104908264+ptarasiewiczNV@users.noreply.github.com>
Co-authored-by: Yizhou Liu <liu_yizhou@outlook.com>
Co-authored-by: Ekagra Ranjan <3116519+ekagra-ranjan@users.noreply.github.com>
Co-authored-by: Benjamin Chislett <benjamin.chislett@centml.ai>
Co-authored-by: zhrrr <43847754+izhuhaoran@users.noreply.github.com>
Co-authored-by: zhuhaoran <zhuhaoran.zhr@alibaba-inc.com>
Co-authored-by: Tyler Michael Smith <tysmith@redhat.com>
Co-authored-by: Robert Shaw <114415538+robertgshaw2-redhat@users.noreply.github.com>
Co-authored-by: 22quinn <33176974+22quinn@users.noreply.github.com>
Co-authored-by: jennyyyyzhen <47012288+jennyyyyzhen@users.noreply.github.com>
Co-authored-by: yZhen <yZhen@fb.com>
Co-authored-by: yzhen <yzhen@devgpu093.cco2.facebook.com>
Co-authored-by: Frαnçois <francois.paupier@gmail.com>
Co-authored-by: Calvin Chen <45745657+calvin0327@users.noreply.github.com>
Co-authored-by: Siyuan Liu <lsiyuan@google.com>
Co-authored-by: Hossein Sarshar <hossein.sarshar@gmail.com>
Co-authored-by: Tyler Michael Smith <tyler@neuralmagic.com>
Co-authored-by: Concurrensee <yidawu@alumni.cmu.edu>
Co-authored-by: Li, Jiang <jiang1.li@intel.com>
Co-authored-by: Rui Qiao <161574667+ruisearch42@users.noreply.github.com>
Co-authored-by: 汪志鹏 <wangzhipeng628@gmail.com>
Co-authored-by: Lu Fang <30275821+houseroad@users.noreply.github.com>
Co-authored-by: Chen Zhang <zhangch99@outlook.com>
Co-authored-by: youkaichao <youkaichao@gmail.com>
Co-authored-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Co-authored-by: Raushan Turganbay <raushan.turganbay@alumni.nu.edu.kz>
Co-authored-by: Simon Mo <simon.mo@hey.com>
Co-authored-by: SorenDreano <71752785+SorenDreano@users.noreply.github.com>
Co-authored-by: Soren Dreano <soren@numind.ai>
Co-authored-by: Varun Sundar Rabindranath <varunsundar08@gmail.com>
Co-authored-by: Varun Sundar Rabindranath <vsundarr@redhat.com>
Co-authored-by: Yikun Jiang <yikun@apache.org>
Co-authored-by: Yan Ru Pei <yanrpei@gmail.com>
Co-authored-by: Jiaxin Shan <seedjeffwan@gmail.com>
Co-authored-by: Mark McLoughlin <markmc@redhat.com>
Co-authored-by: Vadim Gimpelson <156319763+vadiklyutiy@users.noreply.github.com>
Co-authored-by: Kaixi Hou <kaixih@nvidia.com>
Co-authored-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Co-authored-by: Seiji Eicher <58963096+eicherseiji@users.noreply.github.com>
Co-authored-by: wang.yuqi <noooop@126.com>
Co-authored-by: Xu Wenqing <121550081+Xu-Wenqing@users.noreply.github.com>
Co-authored-by: Lain <fusiyuan2000@hotmail.com>
Co-authored-by: jmswen <jmswen@users.noreply.github.com>
Co-authored-by: Kebe <mail@kebe7jun.com>
Co-authored-by: Yang Wang <elainewy@meta.com>
Co-authored-by: Huy Do <huydhn@gmail.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Guillaume Calmettes <gcalmettes@scaleway.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Chiyue Wei <92623189+dubcyfor3@users.noreply.github.com>
Co-authored-by: Chiyue Wei <chiyuew@nvidia.com>
Co-authored-by: Povilas Kanapickas <povilas@radix.lt>
Co-authored-by: Dipika Sikka <dipikasikka1@gmail.com>
Co-authored-by: Luis Vega <vegaluisjose@users.noreply.github.com>
Co-authored-by: Luis Vega <2478335+vegaluisjose@users.noreply.github.com>
Co-authored-by: Jerry Zhang <jerryzh168@gmail.com>
Co-authored-by: Xu Song <xusong.vip@gmail.com>
Co-authored-by: Aaron Pham <contact@aarnphm.xyz>
Co-authored-by: Jinghui Zhang <jinghuizhang0804@gmail.com>
Co-authored-by: jinghui <jinghui@fb.com>
Co-authored-by: Siqi Yan <ysq0807@hotmail.com>
Co-authored-by: Siqi Yan <siqi@meta.com>
Co-authored-by: Nishidha <nishidha.panpaliya@partner.ibm.com>
Co-authored-by: Md. Shafi Hussain <Md.Shafi.Hussain@ibm.com>
Co-authored-by: Adolfo Victoria <adolfokarim@gmail.com>
Co-authored-by: Adolfo Victoria <adovi@meta.com>
Co-authored-by: Alexei-V-Ivanov-AMD <156011006+Alexei-V-Ivanov-AMD@users.noreply.github.com>
Co-authored-by: ElizaWszola <ewszola@redhat.com>
Co-authored-by: QiliangCui <derrhein@gmail.com>
Co-authored-by: Aaruni Aggarwal <47731267+AaruniAggarwal@users.noreply.github.com>
Co-authored-by: Driss Guessous <32754868+drisspg@users.noreply.github.com>
Co-authored-by: Lifans <draftbks@gmail.com>
Co-authored-by: pramenku <7664080+pramenku@users.noreply.github.com>
Co-authored-by: Akash kaothalkar <61960177+Akashcodes732@users.noreply.github.com>
Co-authored-by: Akash Kaothalkar <akash.kaothalkar@ibm.com>
Co-authored-by: Kseniya Parkhamchuk <43078183+KsuParkhamchuk@users.noreply.github.com>
Co-authored-by: Se7en <chengzw258@163.com>
Co-authored-by: Conroy Cheers <conroy@corncheese.org>
Co-authored-by: Yinghai Lu <yinghai@thinkingmachines.ai>
Co-authored-by: Kyle Sayers <kylesayrs@gmail.com>
Co-authored-by: liusiqian-tal <141730978+liusiqian-tal@users.noreply.github.com>
Co-authored-by: Pavani Majety <pmajety@nvidia.com>
Co-authored-by: Ye (Charlotte) Qi <yeq@meta.com>
Co-authored-by: Tianyu Guo <guoty9@mail2.sysu.edu.cn>
Co-authored-by: XiongfeiWei <isaacwxf23@gmail.com>
Co-authored-by: Li Wang <wangli858794774@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Anna Pendleton <pendleton@google.com>
Co-authored-by: Louie Tsai <louie.tsai@intel.com>
Co-authored-by: Li, Jiang <bigpyj64@gmail.com>
Co-authored-by: Rachel Guo <35738743+YUNQIUGUO@users.noreply.github.com>
Co-authored-by: Isotr0py <2037008807@qq.com>

tanujtiwari1998 pushed a commit to character-tech/vllm that referenced this pull request Jul 7, 2025
…quest (#12)

* [Bugfix][ROCm] fix the power of 2 exception from triton_unified_attention.py when running llama4 models and unit test fix (vllm-project#18100)

Signed-off-by: Hongxia Yang <hongxia.yang@amd.com>
Signed-off-by: tjtanaa <tunjian.tan@embeddedllm.com>
Co-authored-by: tjtanaa <tunjian.tan@embeddedllm.com>

* Prevent the cross-encoder logic from being applied to classification tasks (vllm-project#18838)

Signed-off-by: Max de Bayser <mbayser@br.ibm.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>

* Add ability to use CUDAGraphs with use_inductor=False (vllm-project#17345)

Signed-off-by: rzou <zou3519@gmail.com>

* [Bugfix][TPU] fix moe custom kernel import (vllm-project#18853)

Signed-off-by: Chengji Yao <chengjiyao@google.com>

* [Doc][Neuron] Update documentation for Neuron (vllm-project#18868)

Signed-off-by: Elaine Zhao <elaineyz@amazon.com>

* Skip device and quant Pydantic validation to make plugin device work (vllm-project#18843)

Signed-off-by: Yikun Jiang <yikunkero@gmail.com>

* Fixes a dead link in nightly benchmark readme (vllm-project#18856)

Signed-off-by: Brent Salisbury <bsalisbu@redhat.com>

* [Neuron] Add multi-LoRA support for Neuron. (vllm-project#18284)

Signed-off-by: Satyajith Chilappagari <satchill@amazon.com>

* [LoRA] Add LoRA support for InternVL  (vllm-project#18842)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* [Doc] Remove redundant spaces from compatibility_matrix.md (vllm-project#18891)

Signed-off-by: windsonsea <haifeng.yao@daocloud.io>

* [doc] add CLI doc (vllm-project#18871)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Bugfix] Fix misleading information in the documentation (vllm-project#18845)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* [Misc] Replace TODO in serving transcription (vllm-project#18895)

Signed-off-by: NickLucche <nlucches@redhat.com>

* [Bugfix] Ensure tensors are contiguous during serialisation (vllm-project#18860)

Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com>

* [BugFix] Update pydantic to fix error on python 3.10 (vllm-project#18852)

Signed-off-by: luka <luka@neuralmagic.com>

* Fix an error in dummy weight loading for quantization models (vllm-project#18855)

Signed-off-by: Chenyaaang <chenyangli@google.com>

* [Misc][Tools][Benchmark] Add benchmark_serving support for llama.cpp (vllm-project#18692)

Signed-off-by: Duyi-Wang <duyi.wang@intel.com>

* [Doc] Fix code block formatting in LoRA adapters documentation (vllm-project#18907)

Signed-off-by: Zerohertz <ohg3417@gmail.com>

* [Bugfix] Fix the failing gte embedding test (vllm-project#18720)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [Attention][V1] Toggle for v1 attention backend (vllm-project#18275)

Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>

* [ROCm][V0][Attention] Revert to the previous FA triton kernel (vllm-project#18226)

Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>

* [Deprecation] Disallow pos-args other than `model` when initializing `LLM` (vllm-project#18802)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Misc] Remove duplicate init for self.vllm_config (vllm-project#18896)

Signed-off-by: googs1025 <googs1025@gmail.com>

* [V1] Allocate kv_cache with stride order for V1 (vllm-project#18775)

Signed-off-by: nicklucche <nlucches@redhat.com>

* [BugFix] Make DP work with connector-delayed new requests (vllm-project#18559)

Signed-off-by: Nick Hill <nhill@redhat.com>
Co-authored-by: Will Eaton <weaton@redhat.com>

* [P/D] NixlConnector DP fixes (vllm-project#18903)

Signed-off-by: Will Eaton <weaton@redhat.com>

* Use standalone_compile by default in torch >= 2.8.0 (vllm-project#18846)

Signed-off-by: rzou <zou3519@gmail.com>

* [TPU] remove transpose ops in moe kernel (vllm-project#18923)

Signed-off-by: Chengji Yao <chengjiyao@google.com>

* [Bugfix] Fix PP default fallback behavior for V1 (vllm-project#18915)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [Misc] Update type annotation for rotary embedding `base` (vllm-project#18914)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [TPU][CI/CD] Clean up docker for TPU tests. (vllm-project#18926)

Signed-off-by: Carol Zheng <cazheng@google.com>

* improve the robustness of parsing vlms config in AutoRound (vllm-project#18894)

Signed-off-by: wenhuach21 <wenhua.cheng@intel.com>

* [Bugfix] Consistent ascii handling in tool parsers (vllm-project#18883)

Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>

* [Model] Use AutoWeightsLoader for mamba2 (vllm-project#18918)

Signed-off-by: iLeGend <824040212@qq.com>

* [docs] fix: fix markdown syntax (vllm-project#18927)

* [ROCm] Remove unnecessary assertion of max_model_len in ROCM_AITER_MLA attention backend. (vllm-project#18938)

Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>

* [Bugfix] Remove NVFP4 scales assertions to fix load_format=dummy (vllm-project#18861)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [Deprecation] Remove mean pooling default for `Qwen2EmbeddingModel` (vllm-project#18913)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Misc]Fix benchmarks/README.md for speculative decoding (vllm-project#18897)

Signed-off-by: rabi <ramishra@redhat.com>

* [doc] add mkdocs doc (vllm-project#18930)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Model] Use in-place adds in SigLIP (vllm-project#18922)

Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com>

* [Bugfix][Failing Test] Fix test_vllm_port.py (vllm-project#18618)

Signed-off-by: rabi <ramishra@redhat.com>

* [Misc]Fix typo (vllm-project#18947)

* [Bugfix][TPU] Fix tpu model runner testcase failure (vllm-project#18810)

Signed-off-by: Carol Zheng <cazheng@google.com>

* [CI/Build] remove regex from build dependencies (vllm-project#18945)

Signed-off-by: Daniele Trifirò <dtrifiro@redhat.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>

* [Feature] minicpm eagle support (vllm-project#18943)

Signed-off-by: huangyuxiang03 <huangyx0321@gmail.com>
Co-authored-by: huangyuxiang03 <huangyx0321@gmail.com>

* [doc] show the count for fork and watch (vllm-project#18950)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Docs] Update SECURITY.md with link to our security guide (vllm-project#18961)

Signed-off-by: Russell Bryant <rbryant@redhat.com>

* Improve "failed to get the hash of the compiled graph" error (vllm-project#18956)

Signed-off-by: rzou <zou3519@gmail.com>

* [Perf] API-server scaleout with many-to-many server-engine comms  (vllm-project#17546)

* Benchmark script for fp8 vs bf16 gemm (vllm-project#17126)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [VLM] Add PP support and fix GPTQ inference for Ovis models (vllm-project#18958)

Signed-off-by: isotr0py <2037008807@qq.com>
Signed-off-by: Isotr0py <2037008807@qq.com>

* [Misc] add group_size is -1 in awq quantization (vllm-project#18910)

Signed-off-by: rongfu.leng <rongfu.leng@daocloud.io>

* Tool parser regex timeout handling (vllm-project#18960)

Signed-off-by: Will Eaton <weaton@redhat.com>

* [Docs] Correct multiprocessing design doc (vllm-project#18964)

Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com>

* create util function for batched arange (vllm-project#18937)

* [Frontend] Add rerank support to run_batch endpoint (vllm-project#16278)

Signed-off-by: Pooya Davoodi <pooya.davoodi@parasail.io>

* [Misc] Fix estimated max model len msg (vllm-project#18966)

Signed-off-by: Yong Hoon Shin <yhshin@meta.com>

* [Bugfix]: Fix the incompatibility issue with Structured Outputs when Thinking is disabled (vllm-project#18879)

Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>

* fix security issue of logging llm output (vllm-project#18980)

Signed-off-by: Lu Fang <fanglu@fb.com>
Co-authored-by: Lucia (Lu) Fang <fanglu@meta.com>

* [Neuron] Add Multi-Modal model support for Neuron (vllm-project#18921)

Signed-off-by: Satyajith Chilappagari <satchill@amazon.com>
Co-authored-by: Ashraf Mahgoub <ashymahg@amazon.com>
Co-authored-by: Rohith Nallamaddi <nalrohit@amazon.com>
Co-authored-by: FeliciaLuo <luof@amazon.com>
Co-authored-by: Elaine Zhao <elaineyz@amazon.com>

* [doc] fix the list rendering issue - security.md (vllm-project#18982)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [BugFix] Pydantic part 2 (vllm-project#18911)

Signed-off-by: luka <luka@neuralmagic.com>

* [FEAT][ROCm] Add AITER grouped topk for DeepSeekV2 (vllm-project#18825)

Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>

* [Bugfix] Fix for issue 17396 (vllm-project#18773)

Signed-off-by: Fred Reiss <frreiss@us.ibm.com>

* [ROCm][Kernel] Add gfx950 support for skinny gemms (vllm-project#18010)

Signed-off-by: charlifu <charlifu@amd.com>

* [P/D] NixlConnector use cache device index for memory registration (vllm-project#18969)

Signed-off-by: Piotr Tarasiewicz <ptarasiewicz@nvidia.com>

* [BugFix] Fix multi-node offline data-parallel (vllm-project#18981)

Signed-off-by: Nick Hill <nhill@redhat.com>
Co-authored-by: Yizhou Liu <liu_yizhou@outlook.com>

* [Misc] add return token strs for tokenize (vllm-project#18941)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Misc][Benchmark] Add support for CustomDataset (vllm-project#18511)

* [Bugfix] Fix EAGLE3 broken logits (vllm-project#18909)

Signed-off-by: Benjamin Chislett <benjamin.chislett@centml.ai>

* [Core] Rework dtype resolution (vllm-project#18751)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [LoRA] Support dynamically initialize `packed_modules_mapping` for VLM with arbitrary components (vllm-project#18987)

Signed-off-by: isotr0py <2037008807@qq.com>
Signed-off-by: Isotr0py <2037008807@qq.com>

* [doc] small fix -  mkdocs (vllm-project#18996)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* Let max_num_batched_tokens use human_readable_int for large numbers (vllm-project#18968)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [BugFix] fix data parallel construction of ipv6 url address (vllm-project#18991)

Signed-off-by: rongfu.leng <rongfu.leng@daocloud.io>

* [BugFix] Fix incorrect metrics shutdown error log message (vllm-project#18992)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [doc] wrong output (vllm-project#19000)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Misc] reuse num_tokens_across_dp of get_dp_padding to avoid unnecessary dp all reduce in set_forward_context (vllm-project#18935)

Signed-off-by: Tyler Michael Smith <tysmith@redhat.com>
Co-authored-by: zhuhaoran <zhuhaoran.zhr@alibaba-inc.com>
Co-authored-by: Tyler Michael Smith <tysmith@redhat.com>

* [Bugfix][Nixl] Fix DP Metadata Handshake (vllm-project#19008)

Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com>

* [Core] Support inplace model weights loading (vllm-project#18745)

Signed-off-by: 22quinn <33176974+22quinn@users.noreply.github.com>

* [doc] add pytest tips (vllm-project#19010)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Model] enable data parallel for Llama4 vision encoder (vllm-project#18368)

Signed-off-by: yzhen <yzhen@devgpu093.cco2.facebook.com>
Co-authored-by: yZhen <yZhen@fb.com>
Co-authored-by: yzhen <yzhen@devgpu093.cco2.facebook.com>

* [Frontend] enable custom logging for the uvicorn server (OpenAI API server) (vllm-project#18403)

Signed-off-by: François Paupier <francois.paupier@gmail.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>

* [Bugfix][Model] Attempt to fix eagle in V0. (vllm-project#18978)

Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>

* add an absolute path for run.sh (vllm-project#18258)

Signed-off-by: calvin chen <120380290@qq.com>

* [Hardware][TPU] Initial support of model parallelism with single worker using SPMD (vllm-project#18011)

Signed-off-by: Siyuan Liu <lsiyuan@google.com>
Co-authored-by: Hossein Sarshar <hossein.sarshar@gmail.com>
Co-authored-by: Chengji Yao <chengjiyao@google.com>

* [Doc] Remove duplicate TOCs during MkDocs migration (vllm-project#19021)

Signed-off-by: Zerohertz <ohg3417@gmail.com>

* [Bugfix][EP+DP] Use pplx-kernel internode instead of intranode (vllm-project#19034)

Signed-off-by: Tyler Michael Smith <tysmith@redhat.com>
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>

* Adding "LoRA Test %N" to AMD production tests (vllm-project#18929)

Signed-off-by: Yida Wu <yidawu@alumni.cmu.edu>

* [CPU][CI] Re-enable the CPU CI tests (vllm-project#19046)

Signed-off-by: jiang.li <jiang1.li@intel.com>

* [ROCm][Build] Clean up the ROCm build (vllm-project#19040)

Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>

* [V1] Support DP with Ray (vllm-project#18779)

* Add tarsier model support (vllm-project#18985)

Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com>

* [bugfix] small fix logic issue (vllm-project#18999)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* Reduce logs in CLI scripts and plugin loader (vllm-project#18970)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [Bugfix] Use cmake 3.26.1 instead of 3.26 to avoid build failure (vllm-project#19019)

Signed-off-by: Lu Fang <lufang@fb.com>

* [v1][KVCacheManager] Rename BlockHashType to BlockHash (vllm-project#19015)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* Update docker docs with ARM CUDA cross-compile (vllm-project#19037)

Signed-off-by: mgoin <michael@neuralmagic.com>

* [Doc] Add InternVL LoRA support  (vllm-project#19055)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* [Misc] Update `WeightsMapper` for qwen2-vl/qwen2.5-vl (vllm-project#19054)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [Doc] Update V1 user guide for embedding and enc-dec models (vllm-project#19060)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [doc] clarify windows support (vllm-project#19088)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [CI/Build] Remove V0 LoRA test (vllm-project#19066)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* Fix underscores in dict keys passed via CLI (vllm-project#19030)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* [Bugfix] disable processor cache  (vllm-project#19068)

Signed-off-by: raushan <raushan@huggingface.co>

* [Doc] Improve the Pull Request template with key components (vllm-project#19086)

Signed-off-by: Lu Fang <lufang@fb.com>

* [Misc] Add missing `_Backend` enums (vllm-project#19081)

Signed-off-by: nicklucche <nlucches@redhat.com>

* [Misc] fix: add miss best_of param validation (vllm-project#18555)

Signed-off-by: googs1025 <googs1025@gmail.com>

* [Misc] Add SPDX-FileCopyrightText  (vllm-project#19100)

Signed-off-by: simon-mo <simon.mo@hey.com>

* [Doc] Readme standardization (vllm-project#18695)

Co-authored-by: Soren Dreano <soren@numind.ai>

* [doc] update docker version (vllm-project#19074)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Kernel] DeepEP dispatch-combine kernel integration (vllm-project#18434)

Signed-off-by: Varun <vsundarr@redhat.com>
Co-authored-by: Varun Sundar Rabindranath <vsundarr@redhat.com>

* [V1] Support cross-layer KV sharing (vllm-project#18212)

Signed-off-by: Yong Hoon Shin <yhshin@meta.com>

* [Perf] Tune `scaled_fp8_quant` by increasing vectorization (vllm-project#18844)

Signed-off-by: mgoin <mgoin64@gmail.com>

* Fix interaction between `Optional` and `Annotated` in CLI typing (vllm-project#19093)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Co-authored-by: Yikun Jiang <yikun@apache.org>

* [v1] Re-init input batch for multiple kv cache groups (vllm-project#18654)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* [V1][Spec Decode][Ngram] 1.35x gain -> 1.95x gain on InstructCoder with prompt fix (vllm-project#18971)

* [Bugfix] get_num_blocks_to_allocate with null_block (vllm-project#19031)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* [Bugfix]: Fix the incompatibility issue with tool_choice 'required' when Thinking is enabled (vllm-project#19075)

Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>

* [Bugfix][P/D] Fix Prefix Cache Bug (vllm-project#18411)

Signed-off-by: nicklucche <nlucches@redhat.com>
Co-authored-by: Robert Shaw <114415538+robertgshaw2-redhat@users.noreply.github.com>

* [Bugfix] Max concurrency estimation and check_enough_kv_cache_memory for models with sliding window layers (vllm-project#19029)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* feat: add data parallel rank to KVEventBatch (vllm-project#18925)

* [Misc] Fix path and python alias errors in disagg_prefill examples (vllm-project#18919)

* [Docs] Add developer doc about CI failures (vllm-project#18782)

Signed-off-by: Russell Bryant <rbryant@redhat.com>
Co-authored-by: Mark McLoughlin <markmc@redhat.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>

* [CPU] V1 support for the CPU backend (vllm-project#16441)

* [Core] Cast multimodal input in hf processor (vllm-project#18862)

Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com>

* [KERNEL] Sampler. CUDA kernel for applying repetition penalty (vllm-project#18437)

* [Cleanup][v1]: remove guided-decoding-backend from example (vllm-project#19059)

Signed-off-by: calvin chen <120380290@qq.com>

* [NVIDIA] Add Cutlass MLA backend (vllm-project#17625)

* [Bugfix] Fix FA3 full cuda graph correctness (vllm-project#19106)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* Fix vllm-project#19130 (vllm-project#19132)

Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com>

* [TPU] Skip hanging tests (vllm-project#19115)

Signed-off-by: Siyuan Liu <lsiyuan@google.com>

* Fix ValueError: Missing value for tag key(s): model_name,engine. (vllm-project#19113)

Signed-off-by: Seiji Eicher <seiji@anyscale.com>

* [Misc] Add packages for benchmark as extra dependency (vllm-project#19089)

Signed-off-by: Isotr0py <2037008807@qq.com>

* Improve the output precision of embedding models (vllm-project#19092)

* [CI/Build][Bugfix] Ensure compatibility with transformers 4.52 (vllm-project#18678)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* Add DeepSeek-R1-0528 function call chat template (vllm-project#18874)

Signed-off-by: 许文卿 <xwq391974@alibaba-inc.com>

* Sm100 blockwise fp8 swap ab (vllm-project#18564)

* [Doc] Update V1 Guide for embedding models (vllm-project#19141)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* Allow AsyncLLMEngine.generate to target a specific DP rank (vllm-project#19102)

Signed-off-by: Jon Swenson <jmswen@gmail.com>

* [Bugfix][EP+DP] Fix internode check (vllm-project#19112)

Signed-off-by: Tyler Michael Smith <tysmith@redhat.com>

* [Perf] Tunings for SM100 FP8 CUTLASS kernel (vllm-project#18778)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [TPU] Update dynamo dump file name in compilation test (vllm-project#19108)

Signed-off-by: Siyuan Liu <lsiyuan@google.com>

* [Bugfix] fix v1 cpu worker fails on macOS (vllm-project#19121)

* [Kernel] Integrate batched/masked deepgemm kernel (vllm-project#19111)

Signed-off-by: Varun <vsundarr@redhat.com>
Co-authored-by: Varun <vsundarr@redhat.com>

* [Misc] refactor: simplify EngineCoreClient.make_async_mp_client in AsyncLLM (vllm-project#18817)

Signed-off-by: googs1025 <googs1025@gmail.com>

* [P/D] Heterogeneous TP (vllm-project#18833)

Signed-off-by: nicklucche <nlucches@redhat.com>

* [doc] small fix (vllm-project#19167)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Bugfix][Nixl] Fix full prefix cache hit bug (vllm-project#18632)

Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com>
Signed-off-by: Nick Hill <nhill@redhat.com>
Co-authored-by: Nick Hill <nhill@redhat.com>

* [Bugfix] Fix port handling in make_zmq_path (vllm-project#19117)

* [Torch Nightly]add missing dependency (vllm-project#18770)

Signed-off-by: Yang Wang <elainewy@meta.com>

* Handle non-serializable objects when dumping benchmark results (vllm-project#19114)

* [BugFix][Minor] Fix full cuda graph bug when max_num_seqs < 512 (vllm-project#19171)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* [Bugfix]: Fix the incompatibility issue with stream when Thinking is disabled (vllm-project#19135)

Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>

* [Build] Annotate wheel and container path for release workflow (vllm-project#19162)

Signed-off-by: simon-mo <simon.mo@hey.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* [Misc] Remove unnecessary fallback to prefill-decode attention (vllm-project#19138)

Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>

* [Misc] Do not override NCCL_CUMEM_ENABLE if set explicitly (vllm-project#19105)

Signed-off-by: 22quinn <33176974+22quinn@users.noreply.github.com>

* [Frontend] improve vllm run-batch --help display (vllm-project#19187)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Bugfix] properly catch PIL-related errors for vision models when incorrect data urls are provided (vllm-project#19202)

Signed-off-by: Guillaume Calmettes <gcalmettes@scaleway.com>

* [mistral_common] Add v11 tokenizer (vllm-project#19193)

Signed-off-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Add H20-3e fused MoE kernel tuning configs for DeepSeek-R1/V3 (vllm-project#19205)

* [Hardware][NVIDIA] FP4 MoE kernel optimization (vllm-project#19110)

Signed-off-by: Chiyue Wei <chiyuew@nvidia.com>
Co-authored-by: Chiyue Wei <chiyuew@nvidia.com>

* [MISC][Bugfix] Use less CPU when message queue has been empty for some time (vllm-project#16226)

Signed-off-by: Povilas Kanapickas <povilas@radix.lt>

* [P/D][NixlConnector] Enable FlashInfer backend (vllm-project#19090)

* [Quantization] Skip Fp4 Test for `compressed-tensors` (vllm-project#19217)

* [V1] Use FlashInfer by default on Blackwell GPUs (vllm-project#19118)

* [Model] NemotronH support (vllm-project#18863)

Signed-off-by: Luis Vega <2478335+vegaluisjose@users.noreply.github.com>
Co-authored-by: Luis Vega <2478335+vegaluisjose@users.noreply.github.com>

* Fix AOPerModuleConfig name changes (vllm-project#18869)

Signed-off-by: Jerry Zhang <jerryzh168@gmail.com>

* [Bugfix] Fix EAGLE vocab embedding construction for Llama 70B (vllm-project#19033)

Signed-off-by: Benjamin Chislett <benjamin.chislett@centml.ai>

* [v1] Hybrid Memory Allocator (vllm-project#17996)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* [TPU] update torch_xla pin (vllm-project#19231)

Signed-off-by: Chengji Yao <chengjiyao@google.com>

* Support allowed_token_ids in ChatCompletionRequest (vllm-project#19143)

Signed-off-by: Xu Song <xusong.vip@gmail.com>

* [Chore] update CODEOWNERS (vllm-project#19247)

Signed-off-by: Aaron Pham <contact@aarnphm.xyz>

* [v1][P/D] Fix an edge case in kv cache schedule (vllm-project#19182)

Co-authored-by: jinghui <jinghui@fb.com>

* [TPU] fix kv cache dtype in model runner (vllm-project#19244)

Signed-off-by: Chengji Yao <chengjiyao@google.com>

* [Quantization] Bump compressed-tensors version; update NVFP4A16 test model (vllm-project#19224)

Signed-off-by: Dipika Sikka <dipikasikka1@gmail.com>

* [Docs] Improve V1 KVConnector interface documentation (vllm-project#19172)

Signed-off-by: Nick Hill <nhill@redhat.com>

* Fix CompilationConfig repr (vllm-project#19091)

Signed-off-by: rzou <zou3519@gmail.com>

* Unit Test for run_dp_sharded_vision_model (vllm-project#19103)

Signed-off-by: Siqi Yan <siqi@meta.com>
Co-authored-by: Siqi Yan <siqi@meta.com>

* [Model] Optimize nemotron_h implementation (vllm-project#19249)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* [Core] Raise when non-multi-instance DP clients target a DP rank (vllm-project#19227)

Signed-off-by: Jon Swenson <jmswen@gmail.com>

* improve logits bias (vllm-project#19041)

* Fixed ppc build when it runs on non-RHEL based linux distros (vllm-project#18422)

Signed-off-by: Nishidha Panpaliya <nishidha.panpaliya@partner.ibm.com>
Signed-off-by: Md. Shafi Hussain <Md.Shafi.Hussain@ibm.com>
Signed-off-by: npanpaliya <nishidha.panpaliya@partner.ibm.com>
Co-authored-by: Md. Shafi Hussain <Md.Shafi.Hussain@ibm.com>

* [BugFix] Fix MultiConnector test after HMA changes (vllm-project#19291)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [Bugfix][Core] Update cancellation logic in `generate()` to handle Generator exits (vllm-project#19225)

Co-authored-by: Adolfo Victoria <adovi@meta.com>

* [Core] Fix abrupt request abort (vllm-project#18485)

Signed-off-by: nicklucche <nlucches@redhat.com>
Signed-off-by: Nick Hill <nhill@redhat.com>

Co-authored-by: Nick Hill <nhill@redhat.com>

* [BugFix] Fix tpu_model_runner block_id concatenation (vllm-project#19228)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [Misc][Tools][Benchmark] Fix and improve auto tune script (vllm-project#19163)

Signed-off-by: Chenyaaang <chenyangli@google.com>

* [Build][ROCm] Update Dockerfile.rocm (vllm-project#19296)

Signed-off-by: Alexei V. Ivanov <alexei.ivanov@amd.com>

* [Easy][Test] Simplify test_function_tool_use with multiple parametrizes (vllm-project#19269)

Signed-off-by: Lu Fang <lufang@fb.com>

* [Kernel] Integrate CUTLASS MoE kernel with PPLX (vllm-project#18762)

Signed-off-by: ElizaWszola <ewszola@redhat.com>
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
Co-authored-by: Tyler Michael Smith <tyler@neuralmagic.com>

* [TPU][Test] Add script to run benchmark on TPU for buildkite (vllm-project#19039)

Signed-off-by: Qiliang Cui <derrhein@gmail.com>

* [CI][PowerPC] Use a more appropriate way to select testcase in tests/models/language/pooling/test_embedding.py (vllm-project#19253)

Signed-off-by: Aaruni Aggarwal <aaruniagg@gmail.com>

* Add FlexAttention to V1 (vllm-project#16078)

Signed-off-by: drisspg <drisspguessous@gmail.com>

* [Misc] refactor context extension (vllm-project#19246)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [CI/Build] Improve Llama GGUF test robustness (vllm-project#19287)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [Nit][Benchmark] Fix example in benchmark_serving_structured_output.py (vllm-project#19311)

Signed-off-by: Lifan Shen <lifans@meta.com>

* [AMD] Update compatible packaging version (vllm-project#19309)

Signed-off-by: pramkuma <Pramendra.Kumar@amd.com>

* [BugFix][V1] Fix memory profiling bug (vllm-project#18974)

Signed-off-by: luka <luka@neuralmagic.com>

* [Bugfix]: Fix TypeError: 'float' object cannot be interpreted as an integer (vllm-project#19283)

Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>

* [Bugfix] Re-enable use_cudagraph in vLLM v1 (vllm-project#19299)

Signed-off-by: Richard Zou <zou3519@gmail.com>

* [Misc] Change tests/compile to use VLLM_V1 by default (vllm-project#19302)

Signed-off-by: rzou <zou3519@gmail.com>

* Add H20-3e fused MoE kernel tuning configs for Qwen3-235B-A22B (vllm-project#19315)

Signed-off-by: Xu Wenqing <xuwq1993@qq.com>

* [Hardware][POWER] Add IBM POWER11 Support to CPU Extension Detection (vllm-project#19082)

Signed-off-by: Akash Kaothalkar <akash.kaothalkar@ibm.com>
Co-authored-by: Akash Kaothalkar <akash.kaothalkar@ibm.com>

* [Quantization] Add compressed-tensors NVFP4 support (vllm-project#18312)

* [Multi Modal] Add an env var for message queue max chunk bytes (vllm-project#19242)

Signed-off-by: yZhen <yZhen@fb.com>
Co-authored-by: yZhen <yZhen@fb.com>

* [Bugfix] model_max_length should consider max_model_len in tokenizer_config (vllm-project#19201)

* [Deprecation] Remove `inputs` arg fallback in Engine classes (vllm-project#18799)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Misc] Add documentation update reminder to PR template (vllm-project#19289)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [Frontend] Remove unreachable code from llm.py (vllm-project#19288)

Signed-off-by: KsuParkhamchuk <k.parkhamchuk@gmail.com>

* [Misc] Cleanup compilation tests (vllm-project#19343)

Signed-off-by: rzou <zou3519@gmail.com>

* [doc] improve ci doc (vllm-project#19307)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Doc] Fix description in the Automatic Prefix Caching design doc (vllm-project#19333)

Signed-off-by: cr7258 <chengzw258@163.com>

* [CI/Build] Fix LoRA test (vllm-project#19350)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* [Fix] Allow kernel compilation for CUDA capability 8.7 (vllm-project#19328)

Signed-off-by: Conroy Cheers <conroy@corncheese.org>

* [CI] Introduce rules for llama auto-label (vllm-project#19323)

Signed-off-by: Lu Fang <lufang@fb.com>

* [Docs] Fix a bullet list in usage/security.md (vllm-project#19358)

Signed-off-by: windsonsea <haifeng.yao@daocloud.io>

* [full_graph] Fix query_start_loc padding (vllm-project#19321)

Signed-off-by: Yinghai Lu <yinghai@thinkingmachines.ai>

* [v1] Add fp32 support to v1 engine through flex attn (vllm-project#19319)

Signed-off-by: Isotr0py <2037008807@qq.com>
Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>

* [Misc] Fixes and Optimizations for DeepEP + DeepGEMM combination. (vllm-project#19298)

Signed-off-by: Varun <vsundarr@redhat.com>
Co-authored-by: Varun <vsundarr@redhat.com>

* [Bugfix][Core] Prevent token lengths exceeding `max_model_len` in V0 (vllm-project#19348)

Signed-off-by: 22quinn <33176974+22quinn@users.noreply.github.com>

* [Quantization] Bump compressed-tensors version (vllm-project#19295)

Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>

* [Frontend] Make TIMEOUT_KEEP_ALIVE configurable through env var (vllm-project#18472)

Signed-off-by: liusiqian <liusiqian@tal.com>

* [TPU] Fix KV cache sharing tests (vllm-project#19371)

* [HOT-FIX] Add `kv_sharing_target_layer_name` argument to cutlass_mla backend (vllm-project#19374)

Signed-off-by: Pavani Majety <pmajety@nvidia.com>

* [Misc] Fix a config typo in disable_hybrid_kv_cache_manager configuration (vllm-project#19383)

Signed-off-by: Siyuan Liu <lsiyuan@google.com>

* [V1] Reuse V0's memory_profiling util for gpu worker memory profiling (vllm-project#19312)

Signed-off-by: Ye (Charlotte) Qi <yeq@meta.com>

* [Bugfix] Fix benchmark_moe.py (vllm-project#19016)

Signed-off-by: Tianyu Guo <guoty9@mail2.sysu.edu.cn>

* Use xla flag to improve the quantized model performance (vllm-project#19303)

Signed-off-by: Xiongfei Wei <isaacwxf23@gmail.com>

* Fix docs/mkdocs/hooks/remove_announcement.py (vllm-project#19382)

* [Frontend] Add tqdm_leave_pbar to control progress bar visibility (vllm-project#19357)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Core] Use tuple for kv cache group block ids (vllm-project#19175)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [Bugfix] Fix modelscope token passed in (vllm-project#19389)

Signed-off-by: wangli <wangli858794774@gmail.com>
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
Co-authored-by: Jee Jee Li <pandaleefree@gmail.com>

* [Core] Batch multi modal input using pinned memory (vllm-project#19169)

Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com>

* Add security warning to bug report template (vllm-project#19365)

Signed-off-by: Russell Bryant <rbryant@redhat.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* [Misc] refactor neuron_multimodal and profiling (vllm-project#19397)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* Add clear documentation around the impact of debugging flag (vllm-project#19369)

Signed-off-by: Anna Pendleton <pendleton@google.com>

* Automatically bind CPU OMP Threads of a rank to CPU ids of a NUMA node. (vllm-project#17930)

Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
Co-authored-by: Li, Jiang <bigpyj64@gmail.com>

* Revert "[v1] Add fp32 support to v1 engine through flex attn" (vllm-project#19404)

* [BugFix][FlashInfer] Fix attention backend interface mismatch with unexpected keyword `use_irope` (vllm-project#19134)

Signed-off-by: Yunqiu Guo <guorachel@meta.com>

* [BugFix][CPU] Fix CPU CI by ignoring test_pixtral during collection (vllm-project#19411)

Signed-off-by: jiang.li <jiang1.li@intel.com>

* Simplify ep kernels installation (vllm-project#19412)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [Misc] Slight improvement of the BNB (vllm-project#19418)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
Co-authored-by: Isotr0py <2037008807@qq.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* fix

Signed-off-by: Amog Kamsetty <amogkamsetty@gmail.com>

* fix

Signed-off-by: Amog Kamsetty <amogkamsetty@gmail.com>

* update config

Signed-off-by: Amog Kamsetty <amogkamsetty@gmail.com>

* add

Signed-off-by: Amog Kamsetty <amogkamsetty@gmail.com>

---------

Signed-off-by: Hongxia Yang <hongxia.yang@amd.com>
Signed-off-by: tjtanaa <tunjian.tan@embeddedllm.com>
Signed-off-by: Max de Bayser <mbayser@br.ibm.com>
Signed-off-by: rzou <zou3519@gmail.com>
Signed-off-by: Chengji Yao <chengjiyao@google.com>
Signed-off-by: Elaine Zhao <elaineyz@amazon.com>
Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
Signed-off-by: Brent Salisbury <bsalisbu@redhat.com>
Signed-off-by: Satyajith Chilappagari <satchill@amazon.com>
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
Signed-off-by: windsonsea <haifeng.yao@daocloud.io>
Signed-off-by: reidliu41 <reid201711@gmail.com>
Signed-off-by: NickLucche <nlucches@redhat.com>
Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com>
Signed-off-by: luka <luka@neuralmagic.com>
Signed-off-by: Chenyaaang <chenyangli@google.com>
Signed-off-by: Duyi-Wang <duyi.wang@intel.com>
Signed-off-by: Zerohertz <ohg3417@gmail.com>
Signed-off-by: Isotr0py <2037008807@qq.com>
Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Signed-off-by: googs1025 <googs1025@gmail.com>
Signed-off-by: nicklucche <nlucches@redhat.com>
Signed-off-by: Nick Hill <nhill@redhat.com>
Signed-off-by: Will Eaton <weaton@redhat.com>
Signed-off-by: mgoin <mgoin64@gmail.com>
Signed-off-by: Carol Zheng <cazheng@google.com>
Signed-off-by: wenhuach21 <wenhua.cheng@intel.com>
Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>
Signed-off-by: iLeGend <824040212@qq.com>
Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>
Signed-off-by: rabi <ramishra@redhat.com>
Signed-off-by: Daniele Trifirò <dtrifiro@redhat.com>
Signed-off-by: huangyuxiang03 <huangyx0321@gmail.com>
Signed-off-by: Russell Bryant <rbryant@redhat.com>
Signed-off-by: isotr0py <2037008807@qq.com>
Signed-off-by: rongfu.leng <rongfu.leng@daocloud.io>
Signed-off-by: Pooya Davoodi <pooya.davoodi@parasail.io>
Signed-off-by: Yong Hoon Shin <yhshin@meta.com>
Signed-off-by: Lu Fang <fanglu@fb.com>
Signed-off-by: Fred Reiss <frreiss@us.ibm.com>
Signed-off-by: charlifu <charlifu@amd.com>
Signed-off-by: Piotr Tarasiewicz <ptarasiewicz@nvidia.com>
Signed-off-by: Benjamin Chislett <benjamin.chislett@centml.ai>
Signed-off-by: Tyler Michael Smith <tysmith@redhat.com>
Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com>
Signed-off-by: 22quinn <33176974+22quinn@users.noreply.github.com>
Signed-off-by: yzhen <yzhen@devgpu093.cco2.facebook.com>
Signed-off-by: François Paupier <francois.paupier@gmail.com>
Signed-off-by: calvin chen <120380290@qq.com>
Signed-off-by: Siyuan Liu <lsiyuan@google.com>
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
Signed-off-by: Yida Wu <yidawu@alumni.cmu.edu>
Signed-off-by: jiang.li <jiang1.li@intel.com>
Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com>
Signed-off-by: Lu Fang <lufang@fb.com>
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
Signed-off-by: mgoin <michael@neuralmagic.com>
Signed-off-by: youkaichao <youkaichao@gmail.com>
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Signed-off-by: raushan <raushan@huggingface.co>
Signed-off-by: simon-mo <simon.mo@hey.com>
Signed-off-by: Varun <vsundarr@redhat.com>
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Signed-off-by: Seiji Eicher <seiji@anyscale.com>
Signed-off-by: 许文卿 <xwq391974@alibaba-inc.com>
Signed-off-by: Jon Swenson <jmswen@gmail.com>
Signed-off-by: Yang Wang <elainewy@meta.com>
Signed-off-by: Guillaume Calmettes <gcalmettes@scaleway.com>
Signed-off-by: Patrick von Platen <patrick.v.platen@gmail.com>
Signed-off-by: Chiyue Wei <chiyuew@nvidia.com>
Signed-off-by: Povilas Kanapickas <povilas@radix.lt>
Signed-off-by: Luis Vega <2478335+vegaluisjose@users.noreply.github.com>
Signed-off-by: Jerry Zhang <jerryzh168@gmail.com>
Signed-off-by: Xu Song <xusong.vip@gmail.com>
Signed-off-by: Aaron Pham <contact@aarnphm.xyz>
Signed-off-by: Dipika Sikka <dipikasikka1@gmail.com>
Signed-off-by: Siqi Yan <siqi@meta.com>
Signed-off-by: Nishidha Panpaliya <nishidha.panpaliya@partner.ibm.com>
Signed-off-by: Md. Shafi Hussain <Md.Shafi.Hussain@ibm.com>
Signed-off-by: npanpaliya <nishidha.panpaliya@partner.ibm.com>
Signed-off-by: Alexei V. Ivanov <alexei.ivanov@amd.com>
Signed-off-by: ElizaWszola <ewszola@redhat.com>
Signed-off-by: Qiliang Cui <derrhein@gmail.com>
Signed-off-by: Aaruni Aggarwal <aaruniagg@gmail.com>
Signed-off-by: drisspg <drisspguessous@gmail.com>
Signed-off-by: Lifan Shen <lifans@meta.com>
Signed-off-by: pramkuma <Pramendra.Kumar@amd.com>
Signed-off-by: Richard Zou <zou3519@gmail.com>
Signed-off-by: Xu Wenqing <xuwq1993@qq.com>
Signed-off-by: Akash Kaothalkar <akash.kaothalkar@ibm.com>
Signed-off-by: yZhen <yZhen@fb.com>
Signed-off-by: KsuParkhamchuk <k.parkhamchuk@gmail.com>
Signed-off-by: cr7258 <chengzw258@163.com>
Signed-off-by: Conroy Cheers <conroy@corncheese.org>
Signed-off-by: Yinghai Lu <yinghai@thinkingmachines.ai>
Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
Signed-off-by: liusiqian <liusiqian@tal.com>
Signed-off-by: Pavani Majety <pmajety@nvidia.com>
Signed-off-by: Ye (Charlotte) Qi <yeq@meta.com>
Signed-off-by: Tianyu Guo <guoty9@mail2.sysu.edu.cn>
Signed-off-by: Xiongfei Wei <isaacwxf23@gmail.com>
Signed-off-by: wangli <wangli858794774@gmail.com>
Signed-off-by: Anna Pendleton <pendleton@google.com>
Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
Signed-off-by: Yunqiu Guo <guorachel@meta.com>
Signed-off-by: Amog Kamsetty <amogkamsetty@gmail.com>
Co-authored-by: Hongxia Yang <62075498+hongxiayang@users.noreply.github.com>
Co-authored-by: tjtanaa <tunjian.tan@embeddedllm.com>
Co-authored-by: Maximilien de Bayser <mbayser@br.ibm.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
Co-authored-by: Richard Zou <zou3519@users.noreply.github.com>
Co-authored-by: Chengji Yao <chengjiyao@google.com>
Co-authored-by: aws-elaineyz <elaineyz@amazon.com>
Co-authored-by: Yikun Jiang <yikunkero@gmail.com>
Co-authored-by: Brent Salisbury <bsalisbu@redhat.com>
Co-authored-by: Satyajith Chilappagari <satchill@amazon.com>
Co-authored-by: Jee Jee Li <pandaleefree@gmail.com>
Co-authored-by: Michael Yao <haifeng.yao@daocloud.io>
Co-authored-by: Reid <61492567+reidliu41@users.noreply.github.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: Nicolò Lucchesi <nlucches@redhat.com>
Co-authored-by: Lukas Geiger <lukas.geiger94@gmail.com>
Co-authored-by: Luka Govedič <ProExpertProg@users.noreply.github.com>
Co-authored-by: Chenyaaang <42742451+Chenyaaang@users.noreply.github.com>
Co-authored-by: Duyi-Wang <duyi.wang@intel.com>
Co-authored-by: Hyogeun Oh (오효근) <ohg3417@gmail.com>
Co-authored-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Co-authored-by: Gregory Shtrasberg <156009573+gshtras@users.noreply.github.com>
Co-authored-by: Cyrus Leung <tlleungac@connect.ust.hk>
Co-authored-by: CYJiang <86391540+googs1025@users.noreply.github.com>
Co-authored-by: Nick Hill <nhill@redhat.com>
Co-authored-by: Will Eaton <weaton@redhat.com>
Co-authored-by: Will Eaton <wseaton@users.noreply.github.com>
Co-authored-by: Michael Goin <mgoin64@gmail.com>
Co-authored-by: Carol Zheng <cazheng@google.com>
Co-authored-by: Wenhua Cheng <wenhua.cheng@intel.com>
Co-authored-by: Chauncey <chaunceyjiang@gmail.com>
Co-authored-by: iLeGend <youzhi.jin@intel.com>
Co-authored-by: H <linhaibin.eric@gmail.com>
Co-authored-by: vllmellm <vllm.ellm@embeddedllm.com>
Co-authored-by: Rabi Mishra <ramishra@redhat.com>
Co-authored-by: Always-Naive <97138029+Always-Naive@users.noreply.github.com>
Co-authored-by: Daniele <36171005+dtrifiro@users.noreply.github.com>
Co-authored-by: Shawn Huang <57223022+huangyuxiang03@users.noreply.github.com>
Co-authored-by: huangyuxiang03 <huangyx0321@gmail.com>
Co-authored-by: Russell Bryant <rbryant@redhat.com>
Co-authored-by: rongfu.leng <rongfu.leng@daocloud.io>
Co-authored-by: Yu Guo <82124926+yuguo68@users.noreply.github.com>
Co-authored-by: Pooya Davoodi <pooya.davoodi@parasail.io>
Co-authored-by: Yong Hoon Shin <48474650+sarckk@users.noreply.github.com>
Co-authored-by: Lucia Fang <116399278+luccafong@users.noreply.github.com>
Co-authored-by: Lucia (Lu) Fang <fanglu@meta.com>
Co-authored-by: Ashraf Mahgoub <ashymahg@amazon.com>
Co-authored-by: Rohith Nallamaddi <nalrohit@amazon.com>
Co-authored-by: FeliciaLuo <luof@amazon.com>
Co-authored-by: Fred Reiss <frreiss@us.ibm.com>
Co-authored-by: Charlie Fu <charlifu@amd.com>
Co-authored-by: ptarasiewiczNV <104908264+ptarasiewiczNV@users.noreply.github.com>
Co-authored-by: Yizhou Liu <liu_yizhou@outlook.com>
Co-authored-by: Ekagra Ranjan <3116519+ekagra-ranjan@users.noreply.github.com>
Co-authored-by: Benjamin Chislett <benjamin.chislett@centml.ai>
Co-authored-by: zhrrr <43847754+izhuhaoran@users.noreply.github.com>
Co-authored-by: zhuhaoran <zhuhaoran.zhr@alibaba-inc.com>
Co-authored-by: Tyler Michael Smith <tysmith@redhat.com>
Co-authored-by: Robert Shaw <114415538+robertgshaw2-redhat@users.noreply.github.com>
Co-authored-by: 22quinn <33176974+22quinn@users.noreply.github.com>
Co-authored-by: jennyyyyzhen <47012288+jennyyyyzhen@users.noreply.github.com>
Co-authored-by: yZhen <yZhen@fb.com>
Co-authored-by: yzhen <yzhen@devgpu093.cco2.facebook.com>
Co-authored-by: Frαnçois <francois.paupier@gmail.com>
Co-authored-by: Calvin Chen <45745657+calvin0327@users.noreply.github.com>
Co-authored-by: Siyuan Liu <lsiyuan@google.com>
Co-authored-by: Hossein Sarshar <hossein.sarshar@gmail.com>
Co-authored-by: Tyler Michael Smith <tyler@neuralmagic.com>
Co-authored-by: Concurrensee <yidawu@alumni.cmu.edu>
Co-authored-by: Li, Jiang <jiang1.li@intel.com>
Co-authored-by: Rui Qiao <161574667+ruisearch42@users.noreply.github.com>
Co-authored-by: 汪志鹏 <wangzhipeng628@gmail.com>
Co-authored-by: Lu Fang <30275821+houseroad@users.noreply.github.com>
Co-authored-by: Chen Zhang <zhangch99@outlook.com>
Co-authored-by: youkaichao <youkaichao@gmail.com>
Co-authored-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Co-authored-by: Raushan Turganbay <raushan.turganbay@alumni.nu.edu.kz>
Co-authored-by: Simon Mo <simon.mo@hey.com>
Co-authored-by: SorenDreano <71752785+SorenDreano@users.noreply.github.com>
Co-authored-by: Soren Dreano <soren@numind.ai>
Co-authored-by: Varun Sundar Rabindranath <varunsundar08@gmail.com>
Co-authored-by: Varun Sundar Rabindranath <vsundarr@redhat.com>
Co-authored-by: Yikun Jiang <yikun@apache.org>
Co-authored-by: Yan Ru Pei <yanrpei@gmail.com>
Co-authored-by: Jiaxin Shan <seedjeffwan@gmail.com>
Co-authored-by: Mark McLoughlin <markmc@redhat.com>
Co-authored-by: Vadim Gimpelson <156319763+vadiklyutiy@users.noreply.github.com>
Co-authored-by: Kaixi Hou <kaixih@nvidia.com>
Co-authored-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Co-authored-by: Seiji Eicher <58963096+eicherseiji@users.noreply.github.com>
Co-authored-by: wang.yuqi <noooop@126.com>
Co-authored-by: Xu Wenqing <121550081+Xu-Wenqing@users.noreply.github.com>
Co-authored-by: Lain <fusiyuan2000@hotmail.com>
Co-authored-by: jmswen <jmswen@users.noreply.github.com>
Co-authored-by: Kebe <mail@kebe7jun.com>
Co-authored-by: Yang Wang <elainewy@meta.com>
Co-authored-by: Huy Do <huydhn@gmail.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Guillaume Calmettes <gcalmettes@scaleway.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Chiyue Wei <92623189+dubcyfor3@users.noreply.github.com>
Co-authored-by: Chiyue Wei <chiyuew@nvidia.com>
Co-authored-by: Povilas Kanapickas <povilas@radix.lt>
Co-authored-by: Dipika Sikka <dipikasikka1@gmail.com>
Co-authored-by: Luis Vega <vegaluisjose@users.noreply.github.com>
Co-authored-by: Luis Vega <2478335+vegaluisjose@users.noreply.github.com>
Co-authored-by: Jerry Zhang <jerryzh168@gmail.com>
Co-authored-by: Xu Song <xusong.vip@gmail.com>
Co-authored-by: Aaron Pham <contact@aarnphm.xyz>
Co-authored-by: Jinghui Zhang <jinghuizhang0804@gmail.com>
Co-authored-by: jinghui <jinghui@fb.com>
Co-authored-by: Siqi Yan <ysq0807@hotmail.com>
Co-authored-by: Siqi Yan <siqi@meta.com>
Co-authored-by: Nishidha <nishidha.panpaliya@partner.ibm.com>
Co-authored-by: Md. Shafi Hussain <Md.Shafi.Hussain@ibm.com>
Co-authored-by: Adolfo Victoria <adolfokarim@gmail.com>
Co-authored-by: Adolfo Victoria <adovi@meta.com>
Co-authored-by: Alexei-V-Ivanov-AMD <156011006+Alexei-V-Ivanov-AMD@users.noreply.github.com>
Co-authored-by: ElizaWszola <ewszola@redhat.com>
Co-authored-by: QiliangCui <derrhein@gmail.com>
Co-authored-by: Aaruni Aggarwal <47731267+AaruniAggarwal@users.noreply.github.com>
Co-authored-by: Driss Guessous <32754868+drisspg@users.noreply.github.com>
Co-authored-by: Lifans <draftbks@gmail.com>
Co-authored-by: pramenku <7664080+pramenku@users.noreply.github.com>
Co-authored-by: Akash kaothalkar <61960177+Akashcodes732@users.noreply.github.com>
Co-authored-by: Akash Kaothalkar <akash.kaothalkar@ibm.com>
Co-authored-by: Kseniya Parkhamchuk <43078183+KsuParkhamchuk@users.noreply.github.com>
Co-authored-by: Se7en <chengzw258@163.com>
Co-authored-by: Conroy Cheers <conroy@corncheese.org>
Co-authored-by: Yinghai Lu <yinghai@thinkingmachines.ai>
Co-authored-by: Kyle Sayers <kylesayrs@gmail.com>
Co-authored-by: liusiqian-tal <141730978+liusiqian-tal@users.noreply.github.com>
Co-authored-by: Pavani Majety <pmajety@nvidia.com>
Co-authored-by: Ye (Charlotte) Qi <yeq@meta.com>
Co-authored-by: Tianyu Guo <guoty9@mail2.sysu.edu.cn>
Co-authored-by: XiongfeiWei <isaacwxf23@gmail.com>
Co-authored-by: Li Wang <wangli858794774@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Anna Pendleton <pendleton@google.com>
Co-authored-by: Louie Tsai <louie.tsai@intel.com>
Co-authored-by: Li, Jiang <bigpyj64@gmail.com>
Co-authored-by: Rachel Guo <35738743+YUNQIUGUO@users.noreply.github.com>
Co-authored-by: Isotr0py <2037008807@qq.com>