EvanZhouDev/llm.pdf


Run LLMs inside a PDF file.

Watch how llm.pdf was built on YouTube.

What is llm.pdf?

This is a proof-of-concept project showing that it's possible to run an entire Large Language Model in nothing but a PDF file.

It uses Emscripten to compile llama.cpp into asm.js, which can then be run inside the PDF using an old PDF JavaScript injection technique.

Combined with embedding the entire LLM file into the PDF as base64, this makes it possible to run LLM inference in nothing but a PDF.
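
As a rough illustration of the embedding step, here is a minimal Python sketch. The filenames model.gguf and llamaCpp.js are placeholders (llamaCpp.js stands in for the asm.js build of llama.cpp), not the project's actual files:

import base64

# Read the GGUF weights and base64-encode them so the model can be
# stored inside the PDF as plain text.
with open("model.gguf", "rb") as f:
    model_b64 = base64.b64encode(f.read()).decode("ascii")

# Splice the encoded model into the JavaScript payload that the PDF
# viewer will execute alongside the compiled llama.cpp runtime.
with open("llamaCpp.js") as f:
    js_payload = f'const MODEL_B64 = "{model_b64}";\n' + f.read()

The real pipeline lives in scripts/generatePDF.py, which also handles writing the PDF itself.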

Watch the video on YouTube to learn the full story!

Load a Custom Model in the PDF

The scripts/generatePDF.py file will help you create a PDF with any compatible LLM.

The easiest way to get started is with the following command:

cd scripts
python3 generatePDF.py --model "path/for/model.gguf" --output "path/to/output.pdf"
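
To package several models at once, a small wrapper along these lines should work. Only the --model and --output flags come from the command above; the model filenames are hypothetical:

import subprocess

# Hypothetical GGUF models to package; swap in your own paths.
models = ["smollm-135m.q8_0.gguf", "tinystories.q8_0.gguf"]

for model in models:
    subprocess.run(
        ["python3", "generatePDF.py",
         "--model", model,
         "--output", model.replace(".gguf", ".pdf")],
        cwd="scripts",  # run from the scripts directory, as above
        check=True,     # stop on the first failure
    )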

Choosing a Model

Here are the general guidelines when picking a model:

  • Only GGUF quantized models work (a quick sanity check is sketched after this list).
  • Generally, prefer Q8 quantized models, as those run the fastest.
  • For reference, a 135M parameter model takes around 5s per token of input/output. Anything larger will likely be unreasonably slow.
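
Since only GGUF models work, it can save time to sanity-check a file before embedding it. Every GGUF file begins with the 4-byte magic b"GGUF", so a minimal check (a convenience sketch, not part of llm.pdf) looks like this:

def looks_like_gguf(path: str) -> bool:
    # GGUF files start with the ASCII magic "GGUF".
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"

print(looks_like_gguf("model.gguf"))  # True for a valid GGUF file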

Inspiration and Credits

Thank you to the following for inspiration and reference:

Thank you to the following for creating the tiny LLMs that power llm.pdf:
