
llama2.go


Native Go version of llama2.c.

Pure Go inference code ported from Andrej Karpathy's experimental implementation llama2.c of Meta's LLAMA-2 LLM (latest as of 2023-07-25).

How to run?

  1. get tokenizer.bin from llama2.c (included)
  2. get the weights: wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories110M.bin
  3. install: go install github.com/nikolaydubina/llama2.go@latest
  4. run: llama2.go -checkpoint=stories110M.bin -prompt="good morning said sun to trees"
```
$ llama2.go -checkpoint=stories110M.bin -prompt="good morning said sun to trees"
2023/07/29 09:30:22 config: llama2.Config{Dim:768, HiddenDim:2048, NumLayers:12, NumHeads:12, NumKVHeads:12, VocabSize:32000, SeqLen:1024}
<s>
good morning said sun to trees: "Let's organize an operation!"
The trees clapped their branches and asked "What will we do?"
Badger smiled and replied "We will build a treehouse together!"
The trees got blocks of wood and started to build. Badger put nails in the tiny pieces of wood, while the trees put the blocks together to make a solid base.
When they finished their treehouse, Goodger and the trees sat inside. Badger said, "Look how fancy we made it!"
The trees smiled and nodded. They said, "It's very fancy! Thank you for helping us organize this operation."
Then they lived happily in their fancy treehouse together!
<s>
Once upon a time, there was a boy named Timmy. Timmy was very hungry and wanted to eat his meal. He asked his mom, "What are we having for dinner?" His mom said, "We are having chicken and rice." Timmy said, "Yum! I love chicken and rice."
While they were eating, Timmy's dad came in and said, "Hey Timmy, do you want to watch a movie after
2023/07/29 09:30:58 achieved tok/s: 28.619646
```

Differences from llama2.c

  • the checkpoint is loaded by scanning the file sequentially instead of using mmap

Performance

| model           | llama2.c         | llama2.go (simple) | llama2.go (fast) |
|-----------------|------------------|--------------------|------------------|
| stories42M.bin  | 265.348595 tok/s | 25.677383 tok/s    | 82.793488 tok/s  |
| stories110M.bin | 101.837061 tok/s | 10.474615 tok/s    | 39.280158 tok/s  |

Optimizations

  • transformer steps parallelism
  • loop unrolling
  • in-matrix parallelism
  • (todo) SIMD

Optimizations are fuzz-tested against the basic, known-correct algorithm. To disable optimizations, update the import in llama2/transformer.go to use the package without optimizations and rebuild.

```go
package llama2

import (
	"math"
	"sync"

	"github.com/nikolaydubina/llama2.go/nn"
)
```

Related Work
