
llm4s

Experimental Scala 3 bindings for llama.cpp using Slinc.

Setup

Add llm4s to your build.sbt:

libraryDependencies += "com.donderom" %% "llm4s" % "0.10.0"

For JDK 17, add a .jvmopts file to the project root:

--add-modules=jdk.incubator.foreign
--enable-native-access=ALL-UNNAMED
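The same flags can instead be supplied through sbt settings, if you prefer not to keep a .jvmopts file. A minimal sketch (these are standard sbt keys, not something llm4s itself requires):

```scala
// build.sbt — fork a separate JVM for `run` so the flags take effect there
fork := true
javaOptions ++= Seq(
  "--add-modules=jdk.incubator.foreign",
  "--enable-native-access=ALL-UNNAMED"
)
```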

Version compatibility:

| llm4s | Scala     | JDK    | llama.cpp (commit hash) |
|-------|-----------|--------|-------------------------|
| 0.10+ | 3.3.0     | 17, 19 | 49e7cb5 (Jul 31)        |

Older versions:

| llm4s | Scala     | JDK    | llama.cpp (commit hash) |
|-------|-----------|--------|-------------------------|
| 0.6+  | ---       | ---    | 49e7cb5 (Jul 31)        |
| 0.4+  | ---       | ---    | 70d26ac (Jul 23)        |
| 0.3+  | ---       | ---    | a6803ca (Jul 14)        |
| 0.1+  | 3.3.0-RC3 | 17, 19 | 447ccbe (Jun 25)        |

Usage

import java.nio.file.Paths
import com.donderom.llm4s.*

// System.load requires an absolute path to the shared library
System.load("path/to/libllama.so")
val model = Paths.get("path/to/ggml-model.bin")
val contextParams = ContextParams(threads = 6)
val prompt = "Deep learning is "

Completion

val llm = Llm(model = model, params = contextParams)
val params = LlmParams(context = contextParams, predictTokens = 256)

// To print generation as it goes
llm(prompt, params).foreach: stream =>
  stream.foreach: token =>
    print(token)

// Or build a string
llm(prompt, params).foreach(stream => println(stream.mkString))

llm.close()
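Because the token stream is consumed lazily, you can also stop generation early, for example at a stop sequence. A sketch, assuming the stream yields tokens as a LazyList[String]; the stopAt helper below is illustrative, not part of the llm4s API:

```scala
// Hypothetical helper: accumulate tokens until the output contains the stop
// sequence, then cut the text off at that point. Because the source is lazy,
// no tokens past the stop sequence are ever forced.
def stopAt(tokens: LazyList[String], stop: String): String =
  val out = new StringBuilder
  val it = tokens.iterator
  var done = false
  while it.hasNext && !done do
    out.append(it.next())
    val idx = out.indexOf(stop)
    if idx >= 0 then
      out.setLength(idx) // drop the stop sequence and anything after it
      done = true
  out.toString
```

For example, `llm(prompt, params).map(stream => stopAt(stream, "\n\n"))` would truncate a completion at the first blank line.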

Embeddings

val embedding = Embedding(model = model, params = contextParams)
embedding(prompt, contextParams).foreach: embeddings =>
  embeddings.foreach: embd =>
    print(embd)
    print(' ')
embedding.close()
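Embedding vectors are typically compared with cosine similarity. A self-contained sketch in plain Scala (not part of the llm4s API), assuming the embeddings come back as float arrays:

```scala
// Cosine similarity: dot(a, b) / (|a| * |b|), in [-1, 1].
// Higher values mean the two embeddings point in more similar directions.
def cosineSimilarity(a: Array[Float], b: Array[Float]): Double =
  require(a.length == b.length, "embeddings must have the same dimension")
  var dot = 0.0; var na = 0.0; var nb = 0.0
  for i <- a.indices do
    dot += a(i) * b(i)
    na += a(i) * a(i)
    nb += b(i) * b(i)
  dot / (math.sqrt(na) * math.sqrt(nb))
```

Running the Embedding model on two prompts and comparing the resulting vectors this way gives a simple semantic-similarity score.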
