llama_cpp compiled even when feature switched off #12
Open
johnny-smitherson opened this issue Feb 23, 2025 · 0 comments
johnny-smitherson commented Feb 23, 2025

This crate would also be very useful when used only with OpenAI-compatible network backends (e.g. ollama, a cloud provider, etc.), without having a GPU or CUDA on the machine. For example, a laptop that can reach ollama over the LAN but cannot build llama.cpp locally. The workflow parts of the crate would still be useful even without direct access to a GPU, right?

The feature flags on the llm_client crate do not seem to switch off compilation of llama_cpp; the build is still attempted in llm_devices::llm_binary, as the logs below show.

With the following configuration:

llm_client = { git = "https://github.com/ShelbyJenkins/llm_client", rev = "7ac83d5656e40aa55267660cb7e9f8fb33ac7807", default-features = false, features=[] }

One gets the following logs:

warning: llm_interface@0.0.3: Failed to build llama_cpp: Build failed for D:\the_repo\target\llama_cpp in 0.4226677 seconds
error: failed to run custom build command for `llm_interface v0.0.3 (https://github.com/ShelbyJenkins/llm_client?rev=7ac83d5656e40aa55267660cb7e9f8fb33ac7807#7ac83d56)`

Caused by:
  process didn't exit successfully: `D:\the_repo\target\debug\build\llm_interface-48135458c2729187\build-script-build` (exit code: 1)
  --- stdout
  cargo:rerun-if-changed=build.rs
  Starting D:\the_repo\target\llama_cpp.build Logger
  2025-02-22T23:53:52.493704Z TRACE llm_devices::llm_binary::binary: The system cannot find the path specified. (os error 3)
  2025-02-22T23:53:52.493987Z TRACE llm_devices::llm_binary: Directory D:\the_repo\target\llama_cpp does not exist, skipping removal
  2025-02-22T23:53:52.523390Z TRACE llm_devices::llm_binary::build: Cmake found: cmake version 3.30.4

  CMake suite maintained and supported by Kitware (kitware.com/cmake).

  2025-02-22T23:53:52.524574Z TRACE llm_devices::llm_binary::download: Downloading from https://github.com/ggml-org/llama.cpp/archive/refs/tags/b4735.zip to D:\the_repo\target\llama_cpp\llama.zip
  2025-02-22T23:53:52.538833Z ERROR llm_devices::llm_binary: Build failed: Download failed
  stderr:
   stdout:
  2025-02-22T23:53:52.539407Z TRACE llm_devices::llm_binary::binary: Could not find llama-server.exe in D:\the_repo\target\llama_cpp\llama.zip
  2025-02-22T23:53:52.539817Z TRACE llm_devices::llm_binary: Successfully removed directory D:\the_repo\target\llama_cpp
  2025-02-22T23:53:52.540072Z ERROR llm_devices::llm_binary: Build failed: Build failed for D:\the_repo\target\llama_cpp
  cargo:warning=Failed to build llama_cpp: Build failed for D:\the_repo\target\llama_cpp in 0.4226677 seconds
warning: build failed, waiting for other jobs to finish...

Could the above attempt to compile llama_cpp be skipped when the llm_client/llama_cpp_backend feature is not enabled?
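
For reference, a minimal sketch of how the build step could be gated, assuming a llama_cpp_backend feature (or an equivalent) is forwarded from llm_client down to llm_interface so that its build script can see it. The feature and crate wiring here are assumptions for illustration, not the actual setup in this repo:

// build.rs sketch: skip the native llama.cpp build when the feature is off.
// Cargo exposes each enabled feature of the crate being built to its build
// script as CARGO_FEATURE_<FEATURE_NAME_IN_UPPER_SNAKE_CASE>.
fn main() {
    println!("cargo:rerun-if-changed=build.rs");

    if std::env::var_os("CARGO_FEATURE_LLAMA_CPP_BACKEND").is_none() {
        // Feature not enabled: do not download or build llama.cpp at all.
        return;
    }

    // ... the existing llama.cpp download/build logic would run here ...
}

Note that Cargo only sets CARGO_FEATURE_* for features of the crate whose build script is running, so this only works if the feature is propagated through the dependency chain (llm_client -> llm_interface) rather than existing on llm_client alone.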
