Checklist
1. I have searched related issues but cannot get the expected help.
2. The bug has not been fixed in the latest version.
3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
Describe the bug
Hello, I am currently working on a project that uses Llama 3.1. This week Meta released Llama 3.3 70B, and I tried to run it with the latest lmdeploy Docker image. I also tried building on top of the container and upgrading transformers.
I think there is a change in the Llama tokenizer that I can't pinpoint:
| TypeError: TextEncodeInput must be Union[TextInputSequence, Tuple[InputSequence, InputSequence]]
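For context, this TypeError is raised by the Rust `tokenizers` backend and typically means something other than a plain string reached the fast tokenizer, e.g. `None`. A minimal sketch of that failure mode, independent of lmdeploy (the model path here is just illustrative):

```python
from transformers import AutoTokenizer

# Any non-string input reproduces the same TypeError from the Rust
# backend, regardless of transformers version (path is illustrative).
tok = AutoTokenizer.from_pretrained("meta-llama/Llama-3.3-70B-Instruct")
tok.encode(None)
# TypeError: TextEncodeInput must be Union[TextInputSequence,
#            Tuple[InputSequence, InputSequence]]
```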
Reproduction
Using the latest image, I also tried a custom Dockerfile:

```dockerfile
FROM openmmlab/lmdeploy:latest
RUN pip install flash-attn
# Also tried transformers 4.46 and 4.47; all have the same issue
RUN pip install transformers==4.45.0
```
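To narrow it down, the tokenizer path from the traceback can be exercised outside the server. A sketch, assuming lmdeploy's `Tokenizer` class accepts a model path as its constructor argument (the path is illustrative; the `encode(..., add_bos=...)` call matches the one in `async_engine.py`):

```python
from lmdeploy.tokenizer import Tokenizer

# Exercises the same encode() path the API server hits in
# lmdeploy/serve/async_engine.py -> self.tokenizer.encode(prompt, add_bos=...)
tok = Tokenizer("meta-llama/Llama-3.3-70B-Instruct")
ids = tok.encode("Hello, world", add_bos=True)
print(ids)
```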
Error traceback
+-+---------------- 1 ----------------
| Traceback (most recent call last):
| File "/opt/py3/lib/python3.10/site-packages/starlette/responses.py", line 261, in wrap
| await func()
| File "/opt/py3/lib/python3.10/site-packages/starlette/responses.py", line 250, in stream_response
| async for chunk in self.body_iterator:
| File "/opt/lmdeploy/lmdeploy/serve/openai/api_server.py", line 422, in completion_stream_generator
| async for res in result_generator:
| File "/opt/lmdeploy/lmdeploy/serve/async_engine.py", line 525, in generate
| prompt_input = await self._get_prompt_input(prompt,
| File "/opt/lmdeploy/lmdeploy/serve/async_engine.py", line 474, in _get_prompt_input
| input_ids = self.tokenizer.encode(prompt, add_bos=sequence_start)
| File "/opt/lmdeploy/lmdeploy/tokenizer.py", line 600, in encode
| return self.model.encode(s, add_bos, add_special_tokens, **kwargs)
| File "/opt/lmdeploy/lmdeploy/tokenizer.py", line 366, in encode
| encoded = self.model.encode(s,
| File "/opt/py3/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2791, in encode
| encoded_inputs = self.encode_plus(
| File "/opt/py3/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 3210, in encode_plus
| return self._encode_plus(
| File "/opt/py3/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py", line 603, in _encode_plus
| batched_output = self._batch_encode_plus(
| File "/opt/py3/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py", line 529, in _batch_encode_plus
| encodings = self._tokenizer.encode_batch(
| TypeError: TextEncodeInput must be Union[TextInputSequence, Tuple[InputSequence, InputSequence]]
+------------------------------------
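Since the exception originates in the `tokenizers` package (via `self._tokenizer.encode_batch`), it may help to report the versions of both packages from inside the container; a quick check:

```python
# Run inside the container: the TypeError comes from the Rust
# `tokenizers` package, so its version matters alongside transformers.
import tokenizers
import transformers

print("transformers:", transformers.__version__)
print("tokenizers:", tokenizers.__version__)
```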