[Auto-Parallel] optimize llama27b to avoid unnecessary communication by liym27 · Pull Request #10671 · PaddlePaddle/PaddleNLP · GitHub

[Auto-Parallel] optimize llama27b to avoid unnecessary communication #10671


Merged
merged 1 commit into PaddlePaddle:develop on May 30, 2025

Conversation

@liym27 (Contributor) commented May 29, 2025

Before submitting

  • Lint code. If there are lint issues, please format the code first.
# Install and register `pre-commit` in the project folder
pip install pre-commit && pre-commit install

# Run the checks on individual changed files
pre-commit run --file XXXX.py
  • Add test cases into the tests folder. If there are codecov issues, please add test cases first.

PR types

Performance optimization

PR changes

Models

Description

Optimize the Llama2-7B (llama27b) auto-parallel setup to avoid unnecessary communication.
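
The PR body does not include a code excerpt, so as a rough illustration of what "avoiding unnecessary communication" means in Paddle's auto-parallel API, the sketch below (hypothetical, not this PR's actual diff) states tensor placements explicitly so the framework has no reason to insert an implicit reshard, such as an extra all-gather, before a tensor-parallel matmul. The mesh layout, shapes, and variable names are assumptions for illustration only.

# Hypothetical sketch, not the actual diff of this PR.
# Run under a 2-rank launch, e.g.:
#   python -m paddle.distributed.launch --nproc_per_node 2 demo.py
import paddle
import paddle.distributed as dist

# Assumed two-way model-parallel mesh.
mesh = dist.ProcessMesh([0, 1], dim_names=["mp"])

x = paddle.randn([8, 1024])     # activations
w = paddle.randn([1024, 1024])  # column-parallel weight

# Shard the weight along its output dimension and keep the activation
# explicitly replicated. With both placements stated up front, the
# matmul runs locally on each rank and the framework does not need to
# insert an implicit reshard (extra communication) for `x`.
w_dist = dist.shard_tensor(w, mesh, [dist.Shard(1)])
x_dist = dist.shard_tensor(x, mesh, [dist.Replicate()])

y = paddle.matmul(x_dist, w_dist)  # output sharded along dim 1; no all-gather needed here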

paddle-bot (bot) commented May 29, 2025

Thanks for your contribution!

@liym27 changed the title from "[Auto-Parallel] optimize llama-7b benchmark in temporary solution to …" to "[Auto-Parallel] optimize llama27b to avoid unnecessary communication" on May 30, 2025
@ZHUI merged commit e0921f0 into PaddlePaddle:develop May 30, 2025
6 of 10 checks passed