thatrandomlizarddoggo/kobold_cpp_tutorial
KoboldCPP Quickstart Guide

Here's a quick guide on how to use KoboldCPP, an easy-to-use LLM inference tool for running GGUF models locally.

Installation

Download KoboldCPP from the official releases page:
🔗 KoboldCPP Releases (GitHub)
Make sure to get the right version for your GPU!

Install the requests library

Windows

pip install requests

Linux

python3 -m pip install requests
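A quick way to confirm the install worked is to import the library and print its version (a trivial sanity check, not part of KoboldCPP itself):

```python
# Sanity check: this succeeds only if the requests package is installed.
import requests

print(requests.__version__)
```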

Running a Model

Windows (PowerShell/CMD)

.\koboldcpp.exe --model "model.gguf" --threads 4 --port 5001 --contextsize 4096

Linux

./koboldcpp --model "model.gguf" --threads 4 --port 5001 --contextsize 4096 
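With the server running on port 5001, the requests library installed above can be used to script generations against KoboldCPP's KoboldAI-compatible HTTP API. This is a minimal sketch: the endpoint path, JSON field names, and response shape below reflect the common /api/v1/generate interface, but treat them as assumptions and check the docs for your KoboldCPP version:

```python
import requests

# Assumed endpoint of a locally running KoboldCPP server (see --port above).
API_URL = "http://localhost:5001/api/v1/generate"

def build_payload(prompt, max_length=80):
    # Minimal request body; KoboldCPP accepts further sampler settings
    # (temperature, top_p, ...) as extra keys in the same JSON object.
    return {"prompt": prompt, "max_length": max_length}

def generate(prompt, max_length=80):
    resp = requests.post(API_URL, json=build_payload(prompt, max_length), timeout=120)
    resp.raise_for_status()
    # Responses wrap generations as {"results": [{"text": "..."}]}.
    return resp.json()["results"][0]["text"]

# Usage (requires a running server):
#   print(generate("The quick brown fox"))
```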

GPU support

When running on a GPU, you may need to add an extra argument, e.g. for NVIDIA CUDA builds:

 --usecublas
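On CUDA builds you can also offload model layers to the GPU with --gpulayers; the layer count below is an illustrative value to tune against your available VRAM:

```shell
# Run with CUDA acceleration, offloading 30 layers to the GPU
# (reduce the number if you run out of VRAM).
./koboldcpp --model "model.gguf" --usecublas --gpulayers 30 --threads 4 --port 5001 --contextsize 4096
```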
