[LLM]: fix block_size setting for llama. by zhaohaixu · Pull Request #9921 · PaddlePaddle/PaddleNLP · GitHub

[LLM]: fix block_size setting for llama. #9921


Merged
merged 1 commit into PaddlePaddle:develop from zhaohaixu:llm_dev on Feb 21, 2025

Conversation

zhaohaixu
Contributor

PR types

Bug fixes.

PR changes

Models.

Description

Enable llama to run with the block_size specified on the command line, because some custom device kernels do not support the default block_size (64), e.g. on sdaa.
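
The one-line change in paddlenlp/experimental/transformers/llama/modeling.py is not reproduced on this page. As a minimal sketch of the idea only (the function name resolve_block_size and the config field are illustrative assumptions, not the merged diff), the fix amounts to preferring a user-supplied value over the hardcoded default:

    # Hypothetical sketch, not the actual patch: prefer the block_size
    # carried on the config (e.g. set from the command line) and fall back
    # to the previous hardcoded default of 64 only when none was given.
    DEFAULT_BLOCK_SIZE = 64

    def resolve_block_size(config):
        """Return the block size to use for the block attention kernels."""
        block_size = getattr(config, "block_size", None)
        return block_size if block_size is not None else DEFAULT_BLOCK_SIZE

With a change along these lines, a device whose kernels reject block_size=64 (such as sdaa) can pass a supported value on the command line instead of being pinned to the default.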

paddle-bot bot commented Feb 21, 2025

Thanks for your contribution!

codecov bot commented Feb 21, 2025

Codecov Report

Attention: Patch coverage is 0% with 1 line in your changes missing coverage. Please review.

Project coverage is 51.34%. Comparing base (b8ebe3e) to head (22ce49c).
Report is 330 commits behind head on develop.

Files with missing lines                                 Patch %   Lines
...dlenlp/experimental/transformers/llama/modeling.py    0.00%     1 Missing ⚠️
Additional details and impacted files
@@           Coverage Diff            @@
##           develop    #9921   +/-   ##
========================================
  Coverage    51.33%   51.34%           
========================================
  Files          745      745           
  Lines       118592   118593    +1     
========================================
+ Hits         60884    60886    +2     
+ Misses       57708    57707    -1     

☔ View full report in Codecov by Sentry.

@ZHUI merged commit 0c65ce7 into PaddlePaddle:develop on Feb 21, 2025
10 of 12 checks passed
@zhaohaixu deleted the llm_dev branch on March 11, 2025 07:44