Stuck on "Initializing LlamaService..." · Issue #147 · elizaOS/eliza-starter
Open
YakovL opened this issue Feb 24, 2025 · 14 comments

@YakovL commented Feb 24, 2025

While this somewhat duplicates #127, in my case it is not resolved yet, and I can't reopen that issue, so let me create a new one.

Note: I'm aware of the comment that suggests using eliza instead of eliza-starter fixes the issue. I'm exploring that path, but this is the eliza-starter repo, so I consider the issue valid here.

Symptom: on start, the project gets stuck on the "Initializing LlamaService..." message. This happens on an Ubuntu server using Docker, while the same setup on Windows (without Docker) works fine (there, pressing Enter is enough to get things going).

More details:

  • I'm on main, b2dcaaf;
  • I've changed:
    • docker-compose.yaml: command: ["pnpm", "start"] instead of using --character argument, set OPENROUTER_API_KEY, USE_OPENAI_EMBEDDING=TRUE, commented out other env variables (except SERVER_PORT), updated ports to avoid a conflict with another service (custom port instead of 3000);
    • src/character.ts: uncommented name, plugins (empty), clients (empty), added modelProvider: ModelProviderName.OPENAI;
    • src/index.ts: commented out lines that import and use solanaPlugin;
  • I run it with sudo docker compose up;
  • Ubuntu 22.04.4 LTS, Docker version 27.5.1
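For readers trying to reproduce, the docker-compose.yaml changes listed above might look roughly like this sketch (the service name, build context, and host port 8080 are placeholders, not taken from the repo):

```yaml
# Sketch of the modified docker-compose.yaml described above.
# Service name, build context, and host port 8080 are PLACEHOLDERS.
services:
  eliza:
    build: .
    command: ["pnpm", "start"]        # instead of passing a --character argument
    environment:
      - OPENROUTER_API_KEY=${OPENROUTER_API_KEY}
      - USE_OPENAI_EMBEDDING=TRUE
      - SERVER_PORT=3000
      # other env variables commented out
    ports:
      - "8080:3000"                   # PLACEHOLDER custom host port to avoid a conflict
```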
@prxshetty commented Feb 25, 2025

I'm getting the same issue. Has this been fixed? I'm running this on my Windows machine (no Docker). Any suggestions?

Below is the error I got after pressing Enter:

 style: { all: [Array], chat: [Array], post: [Array] },
    adjectives: [
      'brilliant',     'enigmatic',      'technical',
      'witty',         'sharp',          'cunning',
      'elegant',       'insightful',     'chaotic',
      'sophisticated', 'unpredictable',  'authentic',
      'rebellious',    'unconventional', 'precise',
      'dynamic',       'innovative',     'cryptic',
      'daring',        'analytical',     'playful',
      'refined',       'complex',        'clever',
      'astute',        'eccentric',      'maverick',
      'fearless',      'cerebral',       'paradoxical',
      'mysterious',    'tactical',       'strategic',
      'audacious',     'calculated',     'perceptive',
      'intense',       'unorthodox',     'meticulous',
      'provocative'
    ],
    extends: []
  }
]
[2025-02-25 17:30:40] INFO: Eliza(b850bc30-45f8-0041-a00a-83df46d8555d) - Initializing AgentRuntime with options:
    character: "Eliza"
    modelProvider: "llama_local"
    characterModelProvider: "llama_local"
You: [2025-02-25 17:30:40] INFO: Eliza(b850bc30-45f8-0041-a00a-83df46d8555d) - Setting Model Provider:
    characterModelProvider: "llama_local"
    optsModelProvider: "llama_local"
    finalSelection: "llama_local"
[2025-02-25 17:30:40] INFO: Eliza(b850bc30-45f8-0041-a00a-83df46d8555d) - Selected model provider: llama_local
[2025-02-25 17:30:40] INFO: Eliza(b850bc30-45f8-0041-a00a-83df46d8555d) - Selected image model provider: llama_local
[2025-02-25 17:30:40] INFO: Eliza(b850bc30-45f8-0041-a00a-83df46d8555d) - Selected image vision model provider: llama_local
[2025-02-25 17:30:40] INFO: Initializing LlamaService...

You:
You:
[2025-02-25 17:32:42] WARN: Invalid embedding input:
    input: " "
    type: "string"
    length: 1

file:///C:/Users/prxsh/Downloads/projets/eliza-starter/node_modules/.pnpm/@elizaos+adapter-sqlite@0.1.9_@google-cloud+vertexai@1.9.2_@langchain+core@0.3.40_openai@4.85_6ooaxzpgrdqvleclp6vuw3onsy/node_modules/@elizaos/adapter-sqlite/dist/index.js:386
    const memories = this.db.prepare(sql).all(...queryParams);
                                          ^
SqliteError: Error reading 2nd vector: zero-length vectors are not supported.
    at SqliteDatabaseAdapter.searchMemoriesByEmbedding (file:///C:/Users/prxsh/Downloads/projets/eliza-starter/node_modules/.pnpm/@elizaos+adapter-sqlite@0.1.9_@google-cloud+vertexai@1.9.2_@langchain+core@0.3.40_openai@4.85_6ooaxzpgrdqvleclp6vuw3onsy/node_modules/@elizaos/adapter-sqlite/dist/index.js:386:43)
    at SqliteDatabaseAdapter.createMemory (file:///C:/Users/prxsh/Downloads/projets/eliza-starter/node_modules/.pnpm/@elizaos+adapter-sqlite@0.1.9_@google-cloud+vertexai@1.9.2_@langchain+core@0.3.40_openai@4.85_6ooaxzpgrdqvleclp6vuw3onsy/node_modules/@elizaos/adapter-sqlite/dist/index.js:303:42)
    at MemoryManager.createMemory (file:///C:/Users/prxsh/Downloads/projets/eliza-starter/node_modules/.pnpm/@elizaos+core@0.1.9_@google-cloud+vertexai@1.9.2_@langchain+core@0.3.40_openai@4.85.4_ws@8.18_6s5elpywqiznxm4ea4qzd3jg44/node_modules/@elizaos/core/dist/index.js:38131:40)
    at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
    at async file:///C:/Users/prxsh/Downloads/projets/eliza-starter/node_modules/.pnpm/@elizaos+client-direct@0.1.9_@google-cloud+vertexai@1.9.2_@langchain+core@0.3.40_openai@4.85._axgmgmrx534enewk4ab2urbbpe/node_modules/@elizaos/client-direct/dist/index.js:4625:9 {
  code: 'SQLITE_ERROR'
}

Node.js v22.13.1
 ELIFECYCLE  Command failed with exit code 1.
PS C:\Users\prxsh\Downloads\projets\eliza-starter>

@eliveikis

Same issue here: Windows 11, WSL2 with the default Ubuntu (24 LTS, I think). When I hit Enter at "Initializing LlamaService..." and then typed "Hello" at the "You:" prompt, I got the following:

[2025-02-28 17:44:42] INFO: Initializing LlamaService...

You: Hello.
Downloading fast-bge-small-en-v1.5 [====================] 100% 0.0s
[2025-02-28 17:45:57] INFO: Generating text with options:
    modelProvider: "llama_local"
    model: "large"
    verifiableInference: false
[2025-02-28 17:45:57] INFO: Selected model: NousResearch/Hermes-3-Llama-3.1-8B-GGUF/resolve/main/Hermes-3-Llama-3.1-8B.Q8_0.gguf?download=true
[2025-02-28 17:45:57] INFO: Model not initialized, starting initialization...
[2025-02-28 17:45:57] INFO: Checking model file...
[2025-02-28 17:45:57] INFO: Model file not found, starting download...
[2025-02-28 17:45:57] INFO: Following redirect to: https://cdn-lfs-us-1.hf.co/repos/97/cc/97ccd42703c9e659e537d359fa1863623f9371054d9ea5e4dd163214b7803ad1/c77c263f78b2f56fbaddd3ef2af750fda6ebb4344a546aaa0bfdd546b1ca8d84?response-content-disposition=attachment%3B+filename*%3DUTF-8%27%27Hermes-3-Llama-3.1-8B.Q8_0.gguf%3B+filename%3D%22Hermes-3-Llama-3.1-8B.Q8_0.gguf%22%3B&Expires=1740768357&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTc0MDc2ODM1N319LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2RuLWxmcy11cy0xLmhmLmNvL3JlcG9zLzk3L2NjLzk3Y2NkNDI3MDNjOWU2NTllNTM3ZDM1OWZhMTg2MzYyM2Y5MzcxMDU0ZDllYTVlNGRkMTYzMjE0Yjc4MDNhZDEvYzc3YzI2M2Y3OGIyZjU2ZmJhZGRkM2VmMmFmNzUwZmRhNmViYjQzNDRhNTQ2YWFhMGJmZGQ1NDZiMWNhOGQ4ND9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSoifV19&Signature=ouTbhI0R%7EL7nLfJJKHygVEPbEUymuJM45SEf0x8szP3QI7pwBq7rm7Jrz2Z8-U2oYcll7dv1m9ufXKNzav8%7EzIayAtBH7tIYR0E0pUEKHiUDTKB0hUwMm-GdrBNFN2puSQuFoD0DuWRt4pTSKaZ7t8D6kaLlqWKAHzFytZeaOa36d05-t-eWR3-61muRpd-3Nz90BCiMjWv3vyvKUjhrnhJ9edllxLiDWVFmKJ9cSYVU6TRpHHTfASDmBYoefXN1u16axjUK62-lsNlRvf8tBIBLRP0AspjZ7sMepGpDm5PxXVlPoH54rvTvFrVsyOSeT%7EAZxr-lTiHhYhA4Je7nJA__&Key-Pair-Id=K24J24Z295AEI9
[2025-02-28 17:45:57] INFO: Downloading model: Hermes-3-Llama-3.1-8B.Q8_0.gguf
[2025-02-28 17:45:57] INFO: Download location: model.gguf
[2025-02-28 17:45:57] INFO: Total size: 8145.11 MB

At which point I exited the program, ran pnpm start again, and it got stuck on "Initializing LlamaService..." again, which results in a model load error:

[2025-02-28 17:47:39] INFO: Initializing LlamaService...

You: Hello.
[2025-02-28 17:48:11] INFO: Generating text with options:
    modelProvider: "llama_local"
    model: "large"
    verifiableInference: false
[2025-02-28 17:48:11] INFO: Selected model: NousResearch/Hermes-3-Llama-3.1-8B-GGUF/resolve/main/Hermes-3-Llama-3.1-8B.Q8_0.gguf?download=true
[2025-02-28 17:48:11] INFO: Model not initialized, starting initialization...
[2025-02-28 17:48:11] INFO: Checking model file...
[2025-02-28 17:48:11] WARN: Model already exists.
[2025-02-28 17:48:11] WARN: LlamaService: No CUDA detected - local response will be slow
[2025-02-28 17:48:11] INFO: Initializing Llama instance...
(node:13440) ExperimentalWarning: `--experimental-loader` may be removed in the future; instead use `register()`:
--import 'data:text/javascript,import { register } from "node:module"; import { pathToFileURL } from "node:url"; register("ts-node/esm", pathToFileURL("./"));'
(Use `node --trace-warnings ...` to show where the warning was created)
(node:13440) [DEP0180] DeprecationWarning: fs.Stats constructor is deprecated.
(Use `node --trace-deprecation ...` to show where the warning was created)
[2025-02-28 17:48:11] INFO: Creating JSON schema grammar...
[2025-02-28 17:48:11] INFO: Loading model...
[2025-02-28 17:48:12] ERROR: Model initialization failed. Deleting model and retrying:
[node-llama-cpp] llama_model_load: error loading model: tensor 'blk.22.attn_q.weight' data is not within the file bounds, model is corrupted or incomplete
[node-llama-cpp] llama_load_model_from_file: failed to load model
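The "data is not within the file bounds" error above suggests the interrupted download left a truncated model file on disk, which the retry then tries to load. A minimal cleanup sketch, assuming the download location (model.gguf) and total size (~8145 MB) reported in the log above; your paths may differ:

```shell
# Delete the model file if it is smaller than the expected ~8145 MB,
# so the next run re-downloads it from scratch instead of loading a
# truncated file.
clean_truncated_model() {
  model="$1"
  min_bytes=$((8145 * 1024 * 1024))   # total size reported by the downloader
  [ -f "$model" ] || return 0
  size=$(wc -c < "$model")
  if [ "$size" -lt "$min_bytes" ]; then
    echo "removing truncated $model ($size bytes)"
    rm "$model"
  fi
}
clean_truncated_model model.gguf      # download location from the log above
```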

@YakovL (Author) commented Feb 28, 2025

@eliveikis this sounds like a different issue (i.e. an issue with downloading/using a model), so you may want to create a separate issue to make it more visible

@eliveikis commented Mar 1, 2025

@YakovL Yes, perhaps. Can you try typing "hello" at your prompt? When I typed a single space, as you seem to have done, I got the same error as you: SqliteError: Error reading 2nd vector: zero-length vectors are not supported.

Typing "hello" on your end may uncover if I have the same issue.

@YakovL (Author) commented Mar 1, 2025

@eliveikis in my case, there's no prompt at all; typing anything or pressing Enter has no effect. It's literally stuck on the "Initializing LlamaService..." message.

@giuspe01 commented Mar 1, 2025

I have the same issue: stuck on "Initializing LlamaService...", but via the API it responds with the same message on each request...

@prxshetty

I will try reproducing this on another machine and let you guys know.

@madjin (Contributor) commented Mar 5, 2025

I'm having this issue with adapter-sqlite while ingesting knowledge via a folder.

@ryanm commented Mar 9, 2025

I found that if I simply entered a message and pressed Enter, I got a response. After that, the "You:" prefix appears as expected. So I think "Initializing LlamaService..." isn't stuck; you just don't see a "You:" before your very first message.

@YakovL (Author) commented Mar 10, 2025

@ryanm this depends on the environment. On Windows without Docker, for instance, I have to press Enter once for "You:" to appear and the dialog to start working. But in the case I described, that doesn't work. I'm not 100% sure that I tried typing things and pressing Enter, but I think I did. The project that relied on Eliza has already been canceled, though, so I'm not actively exploring the issue.

@Redsun777

Same problem here. I'm trying to deploy on Railway, and it's stuck on LlamaService. I haven't found a solution yet.

[2025-03-16 08:13:58] INFO: Initializing LlamaService...

@himanshu-sugha

same error

@yungyoda

I can run the starter locally as mentioned above (I have to press Enter after INFO: Initializing LlamaService...), but on Railway I can't do that, and it remains stuck in the deployment phase.
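If the hang really is just the interactive terminal client waiting for a keypress (as suggested earlier in this thread), one hedged workaround sketch for non-interactive hosts like Railway would be to feed the process a newline on stdin. This is untested on Railway; pnpm start is the starter's normal start command:

```shell
# Hypothetical workaround for non-interactive deploys (e.g. Railway):
# pipe a newline into stdin so the agent receives the "Enter" keypress
# you would normally type locally. The platform start command would be
# something like:
#
#   printf '\n' | pnpm start
#
# Portable stand-in demonstrating that the pipe delivers exactly one
# newline to the child process's stdin:
newlines=$(printf '\n' | wc -l)
echo "delivered $newlines newline(s) to stdin"
```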

@TrueJu commented Apr 14, 2025

@yungyoda I tried your solution, but unfortunately it doesn't change anything. I keep seeing the line [2025-03-16 08:13:58] INFO: Initializing LlamaService... no matter what I type or press. Have there been any other solutions that worked for any of you with the recent version?

10 participants