[Cherry-pick] Fix safetensors shape by DesmonDay · Pull Request #8703 · PaddlePaddle/PaddleNLP

[Cherry-pick] Fix safetensors shape #8703


Closed
DesmonDay wants to merge 27 commits

Conversation

DesmonDay
Contributor

PR types

Bug fixes

PR changes

Others

Description

Cherry-pick #8702
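For context, the stored shape of each tensor in a safetensors checkpoint can be inspected lazily with the upstream `safetensors` library. Below is a minimal, generic sketch of such a shape check (the file name is a placeholder, and this uses the stock `safe_open`, not PaddleNLP's patched fast-safe-open path):

```python
# Minimal sketch: list tensor shapes in a safetensors file without
# materializing the tensors. "model.safetensors" is a placeholder path.
from safetensors import safe_open

with safe_open("model.safetensors", framework="np") as f:
    for name in f.keys():
        # get_slice() only reads header metadata, so get_shape()
        # returns the stored shape without loading the data.
        print(name, f.get_slice(name).get_shape())
```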

lugimzzz and others added 27 commits April 12, 2024 13:39
* Upgrade paddlenlp to 2.8.0

* fix try import

* Add regex to requirements.txt
This reverts commit 7314063.
* support llama-3

* Add llama-3 tokenizer

* fix for llama3
… (PaddlePaddle#8303)

* [Distributed] adapt sequence parallel on LoRA (PaddlePaddle#8235)

* [Distributed] [CustomDevices] adapt lora sp && polish MC2 APIs
* [DistDataloader] Update implementation, add nested.py (PaddlePaddle#8380)
* fix distdataloader, fix eval with dp group (PaddlePaddle#8420)
* [Performance] Optimize unified checkpoint save/load speed. (PaddlePaddle#8204)

* opt unified checkpoint save/load speed.
* [XPU] llama add xpu support (PaddlePaddle#8282)

* [XPU] llama add xpu support

* fix

* use try import

* fix

* refine

* refine

* refine

* refine

* update (PaddlePaddle#8399)

* [LLM] Support fuse attention q, k, v weights  (PaddlePaddle#8202)

1. add use-interface & fuse action

1.1. modify 1., code order

2. switch to name_mapping

3. solve tp branch

3.2 follow hui, handle qkv separately

3.3 handle pdparams

3.4 from torch

3.5 abandon low_cpu_mem_usage

3.6 solve shard branch

* 3.6.1 solve shard branch after rebase develop

* code clean

* remove debug comment

* Redefine fuse and split functions

* Redefine fuse and split functions


* comment and fix

* update method

* update QKV fuse and split

* support fuse weights in multi-files

* add precision compare

* simplify function call

* support use_fast_ffn

* clean modeling and configuration

* add test for gpt and opt

* fix tp_actions get

* add fast_ffn test

* add Qwen2Moe

* Revert "add Qwen2Moe"

This reverts commit 113b883.

* add test for split

* update doc

* update filter_dict_keys

---------

Co-authored-by: Zii <ziangqin.baidu@gmail.com>

* [LLM] Fix fuse or split with same key (PaddlePaddle#8378)

* fix fuse or split with same key

* fix

* fix eps

* update format

* [LLM] add decay steps option for finetuning (PaddlePaddle#8251)

* [LLM] add memory stats to logger of trainer (PaddlePaddle#8269)

* [Distributed] fix lora (PaddlePaddle#8325)

* [LLM] fix lora target modules on llama (PaddlePaddle#8372)

* [Distributed] metric calculation supports tp logits (PaddlePaddle#8370)

* Update model_utils.py

* Update model_utils.py

* Update model_utils.py

---------

Co-authored-by: Jianbang Yang <yangjianbang112@gmail.com>
Co-authored-by: DrownFish19 <DrownFish19@gmail.com>
Co-authored-by: Zii <ziangqin.baidu@gmail.com>
Co-authored-by: Tian <121000916+SylarTiaNII@users.noreply.github.com>
* [fea] moe support (PaddlePaddle#8498)

Co-authored-by: kebo01 <kebo01@baidu.com>

* [fix] Broadcast optimizer state using broadcast_dp without shard-reshard. (PaddlePaddle#8522)
* … (#8419) (PaddlePaddle#8533)

Co-authored-by: Tian <121000916+SylarTiaNII@users.noreply.github.com>
* [Safetensors] Fix fast safe open slice. (PaddlePaddle#8512)
* [FIX DDP] fix ddp (PaddlePaddle#8549)
* Update sequence_parallel for predict

* Do not save moe_group

* Fix safetensors reading
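
As background for the fuse/split commits in the log above ("Support fuse attention q, k, v weights", "Fix fuse or split with same key"): the core idea is to concatenate the separate q/k/v projection weights into one fused matrix so a single matmul produces all three projections, and to split that matrix back losslessly. A minimal numpy sketch of the round trip (shapes are illustrative; PaddleNLP's actual implementation is tensor-parallel aware and operates on checkpoint state dicts):

```python
import numpy as np

hidden = 8  # illustrative hidden size, not a real model dimension
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((hidden, hidden)) for _ in range(3))

# Fuse: concatenate along the output axis so one matmul yields q, k, v.
qkv = np.concatenate([q, k, v], axis=-1)  # shape (hidden, 3 * hidden)

# Split: the inverse must recover the originals bit-for-bit, which is
# the property the "add precision compare" commit above verifies.
q2, k2, v2 = np.split(qkv, 3, axis=-1)
assert all(np.array_equal(a, b) for a, b in zip((q, k, v), (q2, k2, v2)))
```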
paddle-bot commented Jul 3, 2024

Thanks for your contribution!

DesmonDay closed this Jul 3, 2024