[Cherry-pick] Fix safetensors shape #8703
Closed
Conversation
* Upgrade paddlenlp to 2.8.0
* fix try import
* Add regex to requirements.txt
* … (PaddlePaddle#8274)
* try except sp
* fix sp import
This reverts commit 7314063.
* support llama-3
* Add llama-3 tokenizer
* fix for llama3
* … (PaddlePaddle#8303)
* [Distributed] adapt sequence parallel on LoRA (PaddlePaddle#8235)
* [Distributed] [CustomDevices] adapt lora sp && polish MC2 APIs
Remove truncate
* [DistDataloader] Update implementation, add nested.py (PaddlePaddle#8380)
* fix distdataloader, fix eval with dp group (PaddlePaddle#8420)
* [Performance] Optimize unified checkpoint save/load speed. (PaddlePaddle#8204)
* opt unified checkpoint save/load speed.
* [XPU] llama add xpu support (PaddlePaddle#8282)
* [XPU] llama add xpu support
* fix
* use try import
* fix
* refine
* refine
* refine
* refine
* update (PaddlePaddle#8399)
* [LLM] Support fuse attention q, k, v weights (PaddlePaddle#8202)
  1. add use-interface & fuse action
  1.1. modify 1., code order
  2. switch to name_mapping
  3. solve tp branch
  3.2 follow hui, handle qkv separately
  3.3 handle pdparams
  3.4 from torch
  3.5 abandon low_cpu_mem_usage
  3.6 solve shard branch
* 3.6.1 solve shard branch after rebase develop
* code clean
* remove debug comment
* Redefine fuse and split functions
* Redefine fuse and split functions
* comment and fix
* update method
* update QKV fuse and split
* support fuse weights in multi-files
* add precision compare
* simplify function call
* support use_fast_ffn
* clean modeling and configuration
* add test for gpt and opt
* fix tp_actions get
* add fast_ffn test
* add Qwen2Moe
* Revert "add Qwen2Moe" (this reverts commit 113b883)
* add test for split
* update doc
* update filter_dict_keys
  Co-authored-by: Zii <ziangqin.baidu@gmail.com>
* [LLM] Fix fuse or split with same key (PaddlePaddle#8378)
* fix fuse or split with same key
* fix
* fix eps
* update format
* [LLM] add decay steps option for finetuning (PaddlePaddle#8251)
* [LLM] add memory stats to logger of trainer (PaddlePaddle#8269)
* [Distributed] fix lora (PaddlePaddle#8325)
* [LLM] fix lora target modules on llama (PaddlePaddle#8372)
* [Distributed] metric calculation supports tp logits (PaddlePaddle#8370)
* Update model_utils.py
* Update model_utils.py
* Update model_utils.py
  Co-authored-by: Jianbang Yang <yangjianbang112@gmail.com>
  Co-authored-by: DrownFish19 <DrownFish19@gmail.com>
  Co-authored-by: Zii <ziangqin.baidu@gmail.com>
  Co-authored-by: Tian <121000916+SylarTiaNII@users.noreply.github.com>
* [fea] moe support (PaddlePaddle#8498)
  Co-authored-by: kebo01 <kebo01@baidu.com>
* [fix] Broadcast optimizer state using broadcast_dp without shard-reshard. (PaddlePaddle#8522)
* … (PaddlePaddle#8419) (PaddlePaddle#8533)
  Co-authored-by: Tian <121000916+SylarTiaNII@users.noreply.github.com>
* [Safetensors] Fix fast safe open slice. (PaddlePaddle#8512)
* [FIX DDP] fix ddp (PaddlePaddle#8549)
* Update sequence_parallel for predict
* Do not save moe_group
* Fix safetensors reading
PR types
Bug fixes
PR changes
Others
Description
Cherry-pick #8702
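
For context, a minimal sketch of the kind of sliced safetensors read whose shape handling this fix concerns. It uses the upstream `safetensors` Python API rather than PaddleNLP's internal fast safe-open implementation, and the file name and tensor key are illustrative only.

```python
# Sketch of sliced safetensors reading (upstream `safetensors` package,
# not PaddleNLP's internal fast safe-open); file name and key are illustrative.
import numpy as np
from safetensors import safe_open
from safetensors.numpy import save_file

# Write a small demo checkpoint.
weight = np.arange(24, dtype=np.float32).reshape(4, 6)
save_file({"weight": weight}, "demo.safetensors")

# Read a slice lazily and check the reported shapes.
with safe_open("demo.safetensors", framework="np") as f:
    tensor_slice = f.get_slice("weight")
    print(tensor_slice.get_shape())  # full shape from the header: [4, 6]
    rows = tensor_slice[1:3, :]      # reads only rows 1..2 from disk
    print(rows.shape)                # sliced shape: (2, 6)
```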