Bump vllm from 0.7.2 to 0.8.0 in /api-server/qna-eval in the pip group across 1 directory by dependabot[bot] · Pull Request #679 · instructlab/ui · GitHub

Bump vllm from 0.7.2 to 0.8.0 in /api-server/qna-eval in the pip group across 1 directory #679


Merged
merged 1 commit into main on Mar 20, 2025

Conversation

dependabot[bot]
Contributor
@dependabot dependabot bot commented on behalf of github Mar 19, 2025

Bumps the pip group with 1 update in the /api-server/qna-eval directory: vllm.

Updates vllm from 0.7.2 to 0.8.0

Release notes

Sourced from vllm's releases.

v0.8.0 featured 523 commits from 166 total contributors (68 new contributors)!

Highlights

V1

We have now enabled V1 engine by default (#13726) for supported use cases. Please refer to V1 user guide for more detail. We expect better performance for supported scenarios. If you'd like to disable V1 mode, please specify the environment variable VLLM_USE_V1=0, and send us a GitHub issue sharing the reason!
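The opt-out described above can be sketched in a few lines. This is a minimal illustration, assuming only what the release notes state: that vLLM reads the `VLLM_USE_V1` environment variable, so it must be set before vLLM is imported in the process.

```python
import os

# Opt out of the new V1 engine (enabled by default in vLLM 0.8.0) by
# setting the variable before vLLM is imported in this process.
os.environ["VLLM_USE_V1"] = "0"

# A subsequent `import vllm` in this same process would then fall back
# to the legacy engine (assuming the variable is read at startup).
print(os.environ["VLLM_USE_V1"])
```

Setting the variable in the shell (`VLLM_USE_V1=0 python serve.py`) achieves the same thing without touching application code.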

DeepSeek Improvements

We observe state-of-the-art performance when running DeepSeek models on the latest version of vLLM:

  • MLA Enhancements:
  • Distributed Expert Parallelism (EP) and Data Parallelism (DP)
    • EP Support for DeepSeek Models (#12583)
    • Add enable_expert_parallel arg (#14305)
    • EP/TP MoE + DP Attention (#13931)
    • Set up data parallel communication (#13591)
  • MTP: Expand DeepSeek MTP code to support k > n_predict (#13626)
  • Pipeline Parallelism:
    • DeepSeek V2/V3/R1 only place lm_head on last pp rank (#13833)
    • Improve pipeline partitioning (#13839)
  • GEMM
    • Add streamK for block-quantized CUTLASS kernels (#12978)
    • Add benchmark for DeepGEMM and vLLM Block FP8 Dense GEMM (#13917)
    • Add more tuned configs for H20 and others (#14877)

New Models

  • Gemma 3 (#14660)
    • Note: You have to install transformers from main branch (pip install git+https://github.com/huggingface/transformers.git) to use this model. Also, there may be numerical instabilities for float16/half dtype. Please use bfloat16 (preferred by HF) or float32 dtype.
  • Mistral Small 3.1 (#14957)
  • Phi-4-multimodal-instruct (#14119)
  • Grok1 (#13795)
  • QwQ-32B and tool calling (#14479, #14478)
  • Zamba2 (#13185)
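The Gemma 3 note above can be sketched as a short setup sequence. The install command is quoted from the release notes; the model id is a hypothetical placeholder (not taken from this PR), and the `vllm serve --dtype` invocation assumes the standard vLLM CLI.

```shell
# Gemma 3 needs transformers from the main branch (per the release notes):
pip install git+https://github.com/huggingface/transformers.git

# Serve with bfloat16 to avoid the float16/half numerical instabilities
# noted above. The model id below is an example placeholder.
vllm serve google/gemma-3-4b-it --dtype bfloat16
```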

... (truncated)

Commits

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore <dependency name> major version will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself)
  • @dependabot ignore <dependency name> minor version will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself)
  • @dependabot ignore <dependency name> will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself)
  • @dependabot unignore <dependency name> will remove all of the ignore conditions of the specified dependency
  • @dependabot unignore <dependency name> <ignore condition> will remove the ignore condition of the specified dependency and ignore conditions
You can disable automated security fix PRs for this repo from the Security Alerts page.

Bumps the pip group with 1 update in the /api-server/qna-eval directory: [vllm](https://github.com/vllm-project/vllm).


Updates `vllm` from 0.7.2 to 0.8.0
- [Release notes](https://github.com/vllm-project/vllm/releases)
- [Changelog](https://github.com/vllm-project/vllm/blob/main/RELEASE.md)
- [Commits](vllm-project/vllm@v0.7.2...v0.8.0)

---
updated-dependencies:
- dependency-name: vllm
  dependency-type: direct:production
  dependency-group: pip
...

Signed-off-by: dependabot[bot] <support@github.com>
@dependabot dependabot bot added dependencies Pull requests that update a dependency file python Pull requests that update Python code labels Mar 19, 2025
@vishnoianil vishnoianil merged commit 4582dbc into main Mar 20, 2025
10 checks passed
@vishnoianil vishnoianil deleted the dependabot/pip/api-server/qna-eval/pip-a54d0846f2 branch March 20, 2025 17:30
Labels
dependencies (Pull requests that update a dependency file)
python (Pull requests that update Python code)