[Feature] Support float8 dtype storage and deepseek v3 with fp8 inference. by ZHUI · Pull Request #9906 · PaddlePaddle/PaddleNLP · GitHub

[Feature] Support float8 dtype storage and deepseek v3 with fp8 inference. #9906


Merged
18 commits merged into PaddlePaddle:develop on Mar 3, 2025

Conversation

Collaborator
@ZHUI ZHUI commented Feb 19, 2025

Before submitting

  • Lint code. If there are lint issues, please format the code first.
# Install and register `pre-commit` in the project folder
pip install pre-commit && pre-commit install

# Process previous code files separately
pre-commit run --file XXXX.py
  • Add test cases into the tests folder. If there are codecov issues, please add test cases first (a sketch of such a test follows this list).
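For context, here is a minimal sketch of the kind of float8 round-trip test this item asks for. It is illustrative only, not this PR's actual tests; the class and method names are invented, and it assumes a Paddle build whose cast supports the float8_e4m3fn dtype.

import unittest

import numpy as np
import paddle


class TestFloat8CastRoundTrip(unittest.TestCase):
    """Illustrative sketch only; assumes float8_e4m3fn cast support."""

    def test_bf16_fp8_bf16(self):
        x = paddle.randn([2, 3]).cast("bfloat16")
        x_fp8 = x.cast("float8_e4m3fn")  # 8-bit storage
        x_back = x_fp8.cast("float32")   # dequantize for comparison
        np.testing.assert_allclose(
            x_back.numpy(),
            x.cast("float32").numpy(),
            atol=0.5,  # coarse bound on e4m3 quantization error near N(0, 1)
        )


if __name__ == "__main__":
    unittest.main()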

PR types

New features

PR changes

Others

Description

Support float8 dtype storage.

FP8 models include: deepseek-ai/DeepSeek-V3-FP8 and deepseek-ai/DeepSeek-R1-FP8.

For FP8:

import paddle
paddle.set_default_dtype("bfloat16")
from paddlenlp.transformers import AutoModelForCausalLM, AutoConfig

path = "deepseek-ai/DeepSeek-V3-FP8"
config = AutoConfig.from_pretrained(path)
config.num_hidden_layers = 4
config.dtype = paddle.bfloat16
config.use_fp8 = True
model = AutoModelForCausalLM.from_pretrained(path, config=config)
model.eval()
ret = model(input_ids=paddle.to_tensor([[10, 11, 12, 13, 14, 15]], dtype=paddle.int64), return_dict=True)
print(ret)
# Result when casting FP8 weights to bfloat16 for compute
# CausalLMOutputWithPast(loss=None, logits=Tensor(shape=[1, 5, 129280], dtype=bfloat16, place=Place(gpu:0), stop_gradient=False,
#        [[[ 4.40625000, -2.35937500, -0.38476562, ..., -0.24511719,
#           -0.08007812, -0.36523438],
#          [ 6.78125000, -0.89453125,  0.35937500, ...,  0.34570312,
#            0.52343750,  0.54296875],
#          [ 7.46875000,  1.75000000,  0.18457031, ...,  0.07324219,
#            0.32812500,  0.16406250],
#          [ 5.06250000, -3.82812500,  0.49609375, ...,  0.35546875,
#            0.39257812,  0.31445312],
#          [-0.67578125, -2.10937500,  0.05322266, ...,  0.40625000,
#            0.24023438,  0.01519775]]]), past_key_values=None, hidden_states=None, attentions=None)

# Pure FP8 kernel result

For BFLOAT16:

import paddle
paddle.set_default_dtype("bfloat16")
from paddlenlp.transformers import AutoModelForCausalLM, AutoConfig

path = "deepseek-ai/DeepSeek-V3"
config = AutoConfig.from_pretrained(path)
config.num_hidden_layers = 4
config.dtype = paddle.bfloat16
model = AutoModelForCausalLM.from_pretrained(path, config=config)
model.eval()
ret = model(input_ids=paddle.to_tensor([[10, 11, 12, 13, 14, 15]], dtype=paddle.int64), return_dict=True)
print(ret)
# CausalLMOutputWithPast(loss=None, logits=Tensor(shape=[1, 5, 129280], dtype=bfloat16, place=Place(gpu:0), stop_gradient=False,
#        [[[ 4.40625000, -2.35937500, -0.38476562, ..., -0.24511719,
#           -0.08007812, -0.36523438],
#          [ 6.78125000, -0.89453125,  0.35937500, ...,  0.34570312,
#            0.52343750,  0.54296875],
#          [ 7.46875000,  1.75000000,  0.18457031, ...,  0.07324219,
#            0.32812500,  0.16406250],
#          [ 5.06250000, -3.82812500,  0.49609375, ...,  0.35546875,
#            0.39257812,  0.31445312],
#          [-0.67578125, -2.10937500,  0.05322266, ...,  0.40625000,
#            0.24023438,  0.01519775]]]), past_key_values=None, hidden_states=None, attentions=None)
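As background for the FP8 path above: DeepSeek-V3/R1 FP8 checkpoints store linear weights in float8_e4m3fn together with a per-128x128-block inverse scale tensor (weight_scale_inv). Below is a minimal sketch of the blockwise dequantization this implies; the function name is illustrative and this is not the PR's fused kernel.

import paddle

def dequant_fp8_block(w_fp8, scale_inv, block_size=128):
    """Illustrative blockwise dequantization (not the PR's kernel).

    w_fp8:     [m, n] float8_e4m3fn weight
    scale_inv: [ceil(m/128), ceil(n/128)] per-block inverse scales
    """
    m, n = w_fp8.shape
    # Expand each per-block scale to element granularity, then multiply.
    s = scale_inv.repeat_interleave(block_size, axis=0)
    s = s.repeat_interleave(block_size, axis=1)[:m, :n]
    return w_fp8.cast("bfloat16") * s.cast("bfloat16")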

paddle-bot bot commented Feb 19, 2025

Thanks for your contribution!

@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
You have signed the CLA already but the status is still pending? Let us recheck it.

codecov bot commented Feb 19, 2025

Codecov Report

Attention: Patch coverage is 42.01183% with 196 lines in your changes missing coverage. Please review.

Project coverage is 51.07%. Comparing base (40c3530) to head (be36ba2).
Report is 337 commits behind head on develop.

Files with missing lines                                Patch %   Lines
paddlenlp/transformers/deepseek_v2/kernel.py             15.47%   71 Missing ⚠️
paddlenlp/transformers/deepseek_v2/modeling.py           10.52%   51 Missing ⚠️
paddlenlp/transformers/deepseek_v2/fp8_linear.py         47.54%   32 Missing ⚠️
paddlenlp/mergekit/merge_model.py                        15.00%   17 Missing ⚠️
paddlenlp/utils/paddle_patch.py                          81.25%   15 Missing ⚠️
paddlenlp/transformers/model_utils.py                    65.00%    7 Missing ⚠️
paddlenlp/transformers/moe_gate.py                       60.00%    2 Missing ⚠️
...addlenlp/transformers/deepseek_v2/configuration.py     0.00%    1 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #9906      +/-   ##
===========================================
- Coverage    51.08%   51.07%   -0.01%     
===========================================
  Files          745      748       +3     
  Lines       119274   119536     +262     
===========================================
+ Hits         60927    61055     +128     
- Misses       58347    58481     +134     

☔ View full report in Codecov by Sentry.

return paddle.to_tensor(tensor)


class EextendDtypeNumpySafe(unittest.TestCase):
Contributor

Extend (typo: `EextendDtypeNumpySafe` should be `ExtendDtypeNumpySafe`)

@ZHUI ZHUI changed the title [Feature] Support float8 dtype storage. [Feature] Support float8 dtype storage and deepseek v3 with fp18 inference. Feb 27, 2025
@ZHUI ZHUI changed the title [Feature] Support float8 dtype storage and deepseek v3 with fp18 inference. [Feature] Support float8 dtype storage and deepseek v3 with fp8 inference. Feb 27, 2025
is_bf16 = str(tensor.dtype) in ["uint16", "bfloat16"]
tensor = paddle.Tensor.__call__(tensor, zero_copy=True)
lora_A_tensor = paddle.Tensor.__call__(lora_A_tensor, zero_copy=True)
lora_B_tensor = paddle.Tensor.__call__(lora_B_tensor, zero_copy=True)
if self.is_cpu and is_bf16:
Collaborator

What is the reason for replacing the `__call__` function here?

Collaborator Author

`paddle.Tensor` does not support initializing FP8 tensors; we temporarily switch interfaces to support it.
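A minimal sketch of the workaround described here. The helper name is illustrative; the `paddle.Tensor.__call__(..., zero_copy=True)` call is the one used in this diff.

import paddle

def _to_tensor_fp8_safe(tensor):
    # paddle.to_tensor cannot construct float8 tensors, so fall back to
    # the lower-level constructor for FP8 dtypes (illustrative helper).
    if "float8" in str(tensor.dtype):
        return paddle.Tensor.__call__(tensor, zero_copy=True)
    return paddle.to_tensor(tensor)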

@@ -0,0 +1,226 @@
# Copyright (c) 2025 PaddlePaddle Authors. All Rights Reserved.
Collaborator

Should the copyright header here also credit DeepSeek?

Collaborator

This could be renamed kernel.py -> fp8_kernel.py.

Collaborator Author

Will change it in the next PR.

from .configuration import DeepseekV2Config
from .fp8_linear import Linear
Collaborator

Can this be imported directly here? It looks like it replaces Linear entirely.

Collaborator Author

Yes, it is fully replaced; this runtime path needs the full replacement. We can later make the replacement happen only when loading DeepSeek models.
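A hedged sketch of the swap pattern being discussed: fp8_linear.Linear shadows the Linear used by modeling.py and dispatches on the stored weight dtype. The dequantize-then-matmul body below is a placeholder for clarity, not the PR's FP8 GEMM.

import paddle
import paddle.nn as nn

class Linear(nn.Linear):
    """Drop-in replacement; the fallback body is illustrative."""

    def forward(self, x):
        if "float8" in str(self.weight.dtype):
            # Placeholder path: dequantize and run a bf16 matmul. The real
            # implementation would call a fused FP8 kernel instead.
            w = self.weight.cast("bfloat16")
            out = paddle.matmul(x, w)
            return out + self.bias if self.bias is not None else out
        return super().forward(x)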

@@ -628,36 +635,43 @@ def __init__(self, config: DeepseekV2Config, hidden_size=None, intermediate_size
self.hidden_size = config.hidden_size if hidden_size is None else hidden_size
self.intermediate_size = config.intermediate_size if intermediate_size is None else intermediate_size

def linear_dtype_gaurd():
Collaborator

Has loading of FP8 parameters already been adapted in the from_pretrained interface?

Collaborator Author

Yes, the weights are initialized directly for loading FP8 parameters.
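A hedged sketch of what a dtype guard around Linear construction could look like, so that from_pretrained materializes FP8 parameters directly. The name mirrors the `linear_dtype_gaurd` in the diff above, but whether paddle.set_default_dtype accepts a float8 dtype is an assumption here.

from contextlib import contextmanager

import paddle

@contextmanager
def linear_dtype_guard(use_fp8: bool):
    # Illustrative: temporarily switch the default parameter dtype so that
    # Linear weights created inside the guard are stored as float8_e4m3fn.
    old_dtype = paddle.get_default_dtype()
    if use_fp8:
        paddle.set_default_dtype("float8_e4m3fn")  # assumption: accepted here
    try:
        yield
    finally:
        paddle.set_default_dtype(old_dtype)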

Collaborator
@wawltor wawltor left a comment

LGTM

@wawltor wawltor merged commit 10dd453 into PaddlePaddle:develop Mar 3, 2025
8 of 12 checks passed