8000 Add Local Model Support by yilinxia · Pull Request #449 · EvgSkv/logica · GitHub

Add Local Model Support #449


Merged
merged 1 commit into from
May 25, 2025
Conversation

yilinxia
Contributor

To use a local model, users must provide a GGUF file as input.

Models compatible with llama.cpp and Hugging Face are supported.

Instructions:

brew install llama.cpp

llama-cli -hf bartowski/Llama-3.2-3B-Instruct-GGUF:Q8_0

After the command finishes, its logs indicate where the downloaded GGUF file was saved.
Move the GGUF file into the same directory as your Jupyter Notebook or Google Colab environment.
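Since the steps above end with the GGUF file sitting in the notebook's working directory, a small helper can locate it before handing the path to the model loader. This is a hypothetical sketch (the function name and behavior are not part of this PR), assuming exactly the layout described above:

```python
from pathlib import Path

def find_gguf(directory: str = ".") -> Path:
    """Return the first *.gguf file in `directory` (sorted by name).

    Raises FileNotFoundError if no GGUF file is present, which usually
    means the llama-cli download step has not been run yet, or the file
    was not moved into the notebook's working directory.
    """
    matches = sorted(Path(directory).glob("*.gguf"))
    if not matches:
        raise FileNotFoundError(
            "No GGUF model file found in %r; download one with "
            "llama-cli -hf <repo>:<quant> and move it here." % directory
        )
    return matches[0]
```

The returned path would then be passed wherever the local-model integration expects the GGUF file.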

@EvgSkv
Owner
EvgSkv commented May 25, 2025

Thank you very much, Yilin!

@EvgSkv EvgSkv merged commit bca5867 into EvgSkv:main May 25, 2025
1 check passed