GitHub · Where software is built
token length issue #41
Open
Description

@TimDaub

I use GPT-4, and sometimes when my message is pretty long I run into the error below. If I then adjust my input so that it fits within the token limit of 8192, the prompt goes through, but the model then doesn't produce a very long response and the error resurfaces, e.g. here:

[Screenshot 2023-07-01 at 21 33 18]

This has happened to me twice now.

Request Error. The last prompt was not saved: <class 'openai.error.InvalidRequestError'>: This
model's maximum context length is 8192 tokens. However, your messages resulted in 8205 tokens.
Please reduce the length of the messages.
Traceback (most recent call last):
  File "/Users/[username]/Projects/gpt-cli/gptcli/session.py", line 101, in _respond
    for response in completion_iter:
  File "/Users/[username]/Projects/gpt-cli/gptcli/openai.py", line 20, in complete
    openai.ChatCompletion.create(
  File "/Users/[username]/Projects/gpt-cli/venv/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/[username]/Projects/gpt-cli/venv/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "/Users/[username]/Projects/gpt-cli/venv/lib/python3.11/site-packages/openai/api_requestor.py", line 298, in request
    resp, got_stream = self._interpret_response(result, stream)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/[username]/Projects/gpt-cli/venv/lib/python3.11/site-packages/openai/api_requestor.py", line 700, in _interpret_response
    self._interpret_response_line(
  File "/Users/[username]/Projects/gpt-cli/venv/lib/python3.11/site-packages/openai/api_requestor.py", line 763, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 8192 tokens. However, your messages resulted in 8205 tokens. Please reduce the length of the messages.
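Not part of gpt-cli's actual code, but one way to avoid this error is to trim the oldest non-system messages until the conversation fits a budget of (context limit minus the tokens reserved for the response). A minimal sketch, where `count_tokens` is any callable that measures a string (e.g. built on tiktoken for GPT-4), and the per-message overhead of 4 tokens is a rough ChatML estimate:

```python
def trim_history(messages, max_tokens, count_tokens):
    """Drop the oldest non-system messages until the conversation
    fits within max_tokens, as measured by count_tokens(text)."""

    def total(msgs):
        # ~4 tokens of per-message framing overhead (rough estimate)
        return sum(count_tokens(m["content"]) + 4 for m in msgs)

    trimmed = list(messages)
    while total(trimmed) > max_tokens:
        for i, m in enumerate(trimmed):
            if m["role"] != "system":
                del trimmed[i]  # drop the oldest non-system message
                break
        else:
            break  # only system messages left; nothing more to drop
    return trimmed
```

With tiktoken this could be wired up as `enc = tiktoken.encoding_for_model("gpt-4")` and `count_tokens = lambda s: len(enc.encode(s))`, and the budget set to something like `8192 - 1024` so the model still has room to generate a long response instead of hitting the hard limit mid-reply.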

Metadata

Assignees

No one assigned

Labels

No labels

Projects

No projects

Milestone

No milestone

Relationships

None yet

Development

No branches or pull requests