Can't get remote Ollama instance to work · Issue #27 · ymichael/open-codex · GitHub

Can't get remote Ollama instance to work #27

Open
itskenny0 opened this issue Apr 28, 2025 · 10 comments
itskenny0 commented Apr 28, 2025

Thanks a lot for this project! I'm trying to get it running with my local Ollama instance, but it refuses the request:

https://asciinema.org/a/C3xvq5V69tvdo2X2ndiaZTAG2

OLLAMA_BASE_URL=http://192.168.69.3:11434 open-codex --provider ollama --model Qwen2.5-Coder
╭──────────────────────────────────────────────────────────────╮
│ ● OpenAI Codex (research preview) v0.1.30                    │
╰──────────────────────────────────────────────────────────────╯
╭──────────────────────────────────────────────────────────────╮
│ localhost session: 03ca086866f14ace889b1e7949d23cb2          │
│ ↳ workdir: ~/folder                                          │
│ ↳ model: Qwen2.5-Coder                                       │
│ ↳ approval: suggest                                          │
╰──────────────────────────────────────────────────────────────╯
user
test

    codex
    ⚠️  OpenAI rejected the request. Error details: Status: 404, Code: unknown, Type: unknown, Message: 404 404 page not found. Please verify your settings and try again.
Apr 28 08:39:16 hydrazine ollama[243499]: time=2025-04-28T08:39:16.857Z level=INFO source=images.go:458 msg="total blobs: 259"
Apr 28 08:39:16 hydrazine ollama[243499]: time=2025-04-28T08:39:16.861Z level=INFO source=images.go:465 msg="total unused blobs removed: 0"
Apr 28 08:39:16 hydrazine ollama[243499]: time=2025-04-28T08:39:16.864Z level=INFO source=routes.go:1299 msg="Listening on [::]:11434 (version 0.6.6)"
Apr 28 08:39:16 hydrazine ollama[243499]: time=2025-04-28T08:39:16.865Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Apr 28 08:39:17 hydrazine ollama[243499]: time=2025-04-28T08:39:17.127Z level=INFO source=types.go:130 msg="inference compute" id=GPU-ae8180b6-cd8e-33b7-6987-a21c0940b2eb library=cuda variant=v12 compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1080" total="7.9 GiB" available="7.8 GiB"
Apr 28 08:43:14 hydrazine ollama[243499]: [GIN] 2025/04/28 - 08:43:14 | 200 |    1.439048ms |   192.168.69.15 | GET      "/"
Apr 28 08:43:44 hydrazine ollama[243499]: [GIN] 2025/04/28 - 08:43:44 | 404 |       7.003µs |   192.168.69.15 | GET      "/models"
Apr 28 08:43:50 hydrazine ollama[243499]: [GIN] 2025/04/28 - 08:43:50 | 404 |       5.401µs |   192.168.69.15 | POST     "/chat/completions"
Apr 28 08:44:42 hydrazine ollama[243499]: [GIN] 2025/04/28 - 08:44:42 | 404 |       5.911µs |   192.168.69.15 | GET      "/models"
Apr 28 08:44:56 hydrazine ollama[243499]: [GIN] 2025/04/28 - 08:44:56 | 404 |        5.18µs |   192.168.69.15 | POST     "/chat/completions"
Apr 28 08:46:38 hydrazine ollama[243499]: [GIN] 2025/04/28 - 08:46:38 | 200 |      33.553µs |   192.168.69.15 | GET      "/"
Apr 28 08:46:55 hydrazine ollama[243499]: [GIN] 2025/04/28 - 08:46:55 | 404 |       6.182µs |   192.168.69.15 | GET      "/models"
Apr 28 08:46:57 hydrazine ollama[243499]: [GIN] 2025/04/28 - 08:46:57 | 404 |       5.761µs |   192.168.69.15 | POST     "/chat/completions"
Apr 28 08:47:00 hydrazine ollama[243499]: [GIN] 2025/04/28 - 08:47:00 | 404 |       6.462µs |   192.168.69.15 | POST     "/chat/completions"
Apr 28 08:47:01 hydrazine ollama[243499]: [GIN] 2025/04/28 - 08:47:01 | 404 |       5.931µs |   192.168.69.15 | POST     "/chat/completions"
root@hydrazine:~# ollama pull qwen2.5-coder
pulling manifest
pulling 60e05f210007: 100% 
pulling 66b9ea09bd5b: 100% 
pulling e94a8ecb9327: 100% 
pulling 832dd9e00a68: 100% 
pulling d9bb33f27869: 100% 
verifying sha256 digest
writing manifest
success
root@hydrazine:~# ollama list | grep -i qwen
Qwen2.5-Coder:latest                             2b0496514337    4.7 GB    56 minutes ago
qwen:latest                                      d53d04290064    2.3 GB    12 months ago
root@hydrazine:~# ollama --version
ollama version is 0.6.6

Am I missing something obvious here? Thanks!

Shubham-Sahoo commented May 2, 2025

Hey @itskenny0! Can you try this:
OLLAMA_BASE_URL=http://192.168.69.3:11434/v1 open-codex --provider ollama --model Qwen2.5-Coder
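
The 404s in your Ollama log fit this: Ollama serves its OpenAI-compatible API under the /v1 prefix, so /chat/completions without it returns 404. As a quick sanity check with curl (assuming the same host and model):

curl http://192.168.69.3:11434/v1/models
curl http://192.168.69.3:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Qwen2.5-Coder", "messages": [{"role": "user", "content": "test"}]}'

The first should return a JSON list of models, the second a chat completion.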

@dviresh93

I'm having the same issue; I tried your suggestion, but I'm still seeing the same error.

[screenshot]

@Shubham-Sahoo

I'm having the same issue; I tried your suggestion, but I'm still seeing the same error.

[screenshot]

I think the Codex error (OpenAI rejected the request) is resolved, but there may be a CORS issue with Ollama.

  1. Can you provide the Ollama logs? You can use journalctl for the ollama.service logs.
  2. Also, check whether the /etc/systemd/system/ollama.service file contains
    Environment=OLLAMA_ORIGINS=*
    (see the commands below).
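
For example, assuming a systemd-managed install:

journalctl -u ollama.service -f
grep OLLAMA_ORIGINS /etc/systemd/system/ollama.service

The first command follows the live service logs; the second prints the origins line if it is set.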

@dviresh93

[screenshot]

This is what I have. The following is not set: the /etc/systemd/system/ollama.service file does not contain
Environment=OLLAMA_ORIGINS=*

I will try updating that and see if I am still facing the issue. Do you know how that environment variable gets set?

@Shubham-Sahoo

After the CORS issue has been resolved in the ollama.service file and the service has been restarted, you have to run open-codex with this command:
OLLAMA_BASE_URL=http://192.168.69.3:11434/v1 open-codex --provider ollama --model Qwen2.5-Coder
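
One way to set the variable, assuming a systemd-managed Ollama install, is to add it under the [Service] section of the unit file and restart:

# in /etc/systemd/system/ollama.service, under [Service]:
Environment=OLLAMA_ORIGINS=*

sudo systemctl daemon-reload
sudo systemctl restart ollama

(sudo systemctl edit ollama achieves the same via a drop-in override.)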

@dviresh93

I tried the setting you mentioned, and this time it's stuck at "thinking...", so I am assuming Codex was not able to get a response.

[screenshots]

Is there anything I am missing? How can I look at detailed logs?

@dviresh93

Does it work on your system? Could it be some configuration mismatch? Is there an option to run this in Docker, and still have it index the code and retain all the original functionality?

Shubham-Sahoo commented May 6, 2025

Does it work on your system? Could it be some configuration mismatch? Is there an option to run this in Docker, and still have it index the code and retain all the original functionality?

You can try using ngrok to expose your Ollama service and then open the link in a browser; it should print:
Ollama is running

If that works, you can use this link instead of http://192.168.69.3:11434/v1, as
OLLAMA_BASE_URL=<URL>/v1
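
A minimal sketch, assuming ngrok is installed (the forwarded URL is a placeholder, and the --host-header option may be needed if Ollama rejects the tunneled Host header):

ngrok http 11434 --host-header="localhost:11434"
OLLAMA_BASE_URL=<your-ngrok-url>/v1 open-codex --provider ollama --model Qwen2.5-Coder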

@Shubham-Sahoo

Can you add these and retry?

Environment=OLLAMA_HOST=0.0.0.0:11434
Environment=OLLAMA_ORIGINS=*
Environment=OLLAMA_DEBUG=1
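
After adding them (again assuming a systemd install), a quick way to confirm they took effect:

sudo systemctl daemon-reload && sudo systemctl restart ollama
systemctl show ollama -p Environment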

akaguny commented May 8, 2025

@dviresh93, are you running it in Docker?
