
[Bug] Llama 3.3 70B Support #2867

Closed · 3 tasks done
vladrad opened this issue Dec 8, 2024 · 2 comments
vladrad commented Dec 8, 2024

Checklist

1. I have searched related issues but could not get the expected help.
2. The bug has not been fixed in the latest version.
3. Please note that if the issue you submit lacks environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve it, which reduces the likelihood of receiving feedback.

Describe the bug

Hello, I am currently working on a project where I use Llama 3.1. This week Meta released Llama 3.3 70B. I tried to run it with the latest version of lmdeploy from Docker, and I also tried extending the container and upgrading transformers.

I think there is a change in the Llama tokenizer that I can't seem to pinpoint:

TypeError: TextEncodeInput must be Union[TextInputSequence, Tuple[InputSequence, InputSequence]]
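
For reference, this TypeError is what a HuggingFace fast tokenizer raises when encode() receives None instead of a string; a plausible cause here (an assumption, not confirmed in this report) is that the chat template did not match the new model, so the rendered prompt came back as None. A minimal sketch — the model name is illustrative only:

from transformers import AutoTokenizer

# Any fast tokenizer reproduces this; the model name is an example, not
# the one from this report.
tok = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
tok.encode("hello")  # works
tok.encode(None)     # TypeError: TextEncodeInput must be Union[TextInputSequence, ...]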

Reproduction

docker run --runtime nvidia --gpus '"device=0"' \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HUGGING_FACE_HUB_TOKEN=" \
    -p 23333:23333 --ipc=host llmdeploy:latest \
    lmdeploy serve api_server casperhansen/llama-3.3-70b-instruct-awq \
    --backend turbomind --model-format awq --model-name Llama3.1-70B-AWQ 

Environment

Using the latest image; I also tried a custom Dockerfile:

FROM openmmlab/lmdeploy:latest
RUN pip install flash-attn
RUN pip install transformers==4.45.0  # tried 4.46 and 4.47; all have the same issue

Error traceback

+-+---------------- 1 ----------------
    | Traceback (most recent call last):
    |   File "/opt/py3/lib/python3.10/site-packages/starlette/responses.py", line 261, in wrap
    |     await func()
    |   File "/opt/py3/lib/python3.10/site-packages/starlette/responses.py", line 250, in stream_response
    |     async for chunk in self.body_iterator:
    |   File "/opt/lmdeploy/lmdeploy/serve/openai/api_server.py", line 422, in completion_stream_generator
    |     async for res in result_generator:
    |   File "/opt/lmdeploy/lmdeploy/serve/async_engine.py", line 525, in generate
    |     prompt_input = await self._get_prompt_input(prompt,
    |   File "/opt/lmdeploy/lmdeploy/serve/async_engine.py", line 474, in _get_prompt_input
    |     input_ids = self.tokenizer.encode(prompt, add_bos=sequence_start)
    |   File "/opt/lmdeploy/lmdeploy/tokenizer.py", line 600, in encode
    |     return self.model.encode(s, add_bos, add_special_tokens, **kwargs)
    |   File "/opt/lmdeploy/lmdeploy/tokenizer.py", line 366, in encode
    |     encoded = self.model.encode(s,
    |   File "/opt/py3/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2791, in encode
    |     encoded_inputs = self.encode_plus(
    |   File "/opt/py3/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 3210, in encode_plus
    |     return self._encode_plus(
    |   File "/opt/py3/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py", line 603, in _encode_plus
    |     batched_output = self._batch_encode_plus(
    |   File "/opt/py3/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py", line 529, in _batch_encode_plus
    |     encodings = self._tokenizer.encode_batch(
    | TypeError: TextEncodeInput must be Union[TextInputSequence, Tuple[InputSequence, InputSequence]]
    +------------------------------------
@AllentDan
Collaborator

I could not reproduce the error with Llama-3.3-70B-Instruct and the latest lmdeploy (transformers 4.46.2).

How about trying the argument --chat-template llama3_1?
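
For example, appended to the reproduction command above (a sketch of how the suggestion could be applied — same image and model as the reporter's, only the last flag is new):

docker run --runtime nvidia --gpus '"device=0"' \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HUGGING_FACE_HUB_TOKEN=" \
    -p 23333:23333 --ipc=host llmdeploy:latest \
    lmdeploy serve api_server casperhansen/llama-3.3-70b-instruct-awq \
    --backend turbomind --model-format awq --model-name Llama3.1-70B-AWQ \
    --chat-template llama3_1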

@vladrad
Author
vladrad commented Dec 9, 2024

Yes! Thank you so much!

lvhan028 closed this as completed Dec 9, 2024