Turn everyday devices into your own AI cluster

Kalavai is a self-hosted platform that turns everyday devices into your very own AI cluster. Home desktops, gaming laptops, work computers, cloud VMs... Aggregate resources from multiple machines and say goodbye to CUDA out-of-memory errors. Deploy your favourite open source LLM, fine-tune it with your own data, or simply run your distributed work, with zero DevOps. Simple. Private. Yours.

This repository contains the kalavai client and its installer.

Join our mailing list for release updates and priority access to new features!

What can Kalavai do?

Kalavai is a platform for distributed computing, so it supports a wide range of tasks; check our examples for a growing list of what you can run.

Install

Requirements

  • A laptop, desktop or virtual machine
  • Admin / privileged access (e.g. sudo access on Linux)
  • A compatible operating system (see the compatibility matrix)

One-line installer

To install the kalavai CLI, run the following command:

curl -sfL https://raw.githubusercontent.com/kalavai-net/kalavai-client/main/installer/install_client.sh | bash -
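If you prefer not to pipe a remote script straight into bash, you can download the installer first and inspect it before running it (a minimal sketch; install_client.sh is just a local working copy of the same script):

# Download the installer, review it, then run it
curl -sfL https://raw.githubusercontent.com/kalavai-net/kalavai-client/main/installer/install_client.sh -o install_client.sh
less install_client.sh
bash install_client.sh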

kalavai CLI

Manage your AI cluster or deploy and monitor jobs with the kalavai CLI:

$ kalavai --help

usage: cli.py [-h] command ...

positional arguments:
  command
    start        Start Kalavai cluster and start/resume sharing resources.
    token        Generate a join token for others to connect to your cluster
    join         Join Kalavai cluster and start/resume sharing resources.
    stop         Stop sharing your device and clean up. DO THIS ONLY IF YOU WANT TO REMOVE KALAVAI-CLIENT from your
                 device.
    pause        Pause sharing your device and make your device unavailable for kalavai scheduling.
    resume       Resume sharing your device and make device available for kalavai scheduling.
    resources    Display information about resources on the cluster
    nodes        Display information about nodes connected
    diagnostics  Run diagnostics on a local installation of kalavai, and stores in log file
    job

options:
  -h, --help  show this help message and exit

To get started, check our quick start guide.

How does it work?

To create an AI cluster, you need a seed node, which acts as the control plane and handles bookkeeping for the cluster. From the seed node you can generate join tokens, which you share with other machines (worker nodes).

The more worker nodes you have in a cluster, the bigger the workloads it can run. Note that a single seed node is all you need for a fully functioning cluster.

Once you have a cluster running, you can easily deploy workloads using template jobs. These are community integrations that let users deploy jobs such as LLM deployments or LLM fine-tuning. Templates make Kalavai easy to use for end users, through a parameterised interface, and they also make the platform easily extensible.
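Put together, a typical end-to-end flow looks like this (a minimal sketch using only the CLI commands shown above; <token> stands for the value printed by kalavai token):

# On the seed node: start the control plane and create a join token
kalavai start
kalavai token

# On each worker node: join using the token shared by the seed node
kalavai join <token>

# On the seed node: check cluster members and aggregated resources
kalavai nodes list
kalavai resources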

Cluster quick start

Kalavai is free to use, with no caps, for both commercial and non-commercial purposes. All you need to get started is one or more computers that can see each other (i.e. within the same network), and you are good to go. If you wish to join computers in different locations / networks, check our managed Kalavai offering.

Create a seed node

Self-hosted Kalavai

Simply use the CLI to start your seed node:

kalavai start

Note that it will take a few minutes to set up and download all dependencies. Check the status of your cluster with:

kalavai diagnostics

Managed Kalavai

Interested in a fully managed, hosted Kalavai server? Register your interest and get to the top of the list. Note: the first 100 to register will get a massive discount!

Wait, isn't Kalavai free, and doesn't it run on my computer? Why would I need a hosted solution? Kalavai offers fully managed, hosted seed nodes, so you can overcome some of the limitations of running the control plane yourself. Use managed Kalavai if:

  • Your devices are not in the same local network
  • You want to access your cluster remotely
  • You want a high-availability cluster, with no downtime!

Add a worker node

On the seed node, generate a join token:

kalavai token

Copy the displayed token. On the worker node, run:

kalavai join <token>

Note that worker nodes must be able to reach the seed node; this can be achieved with a public IP on the seed node or by having both computers on the same local network. After some time, you should be able to see the new node:

kalavai nodes list

You can also see the total resources available:

kalavai resources
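Worker nodes can also be taken offline temporarily or removed entirely with the lifecycle commands from the CLI help above (a minimal sketch):

# Temporarily stop sharing this device's resources with the cluster
kalavai pause

# Make the device available to the kalavai scheduler again
kalavai resume

# Stop sharing and clean up (only if you want to remove kalavai-client from this device)
kalavai stop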

Enough already, let's run stuff!

Check our examples to put your new AI cluster to good use!

Compatibility matrix

If your system is not currently supported, open an issue and request it. We are expanding this list constantly.

OS compatibility

Currently compatible and tested OS:

  • Debian-based linux (such as Ubuntu)

Currently compatible but untested (interested in testing them?):

  • Fedora
  • RedHat
  • Any distro capable of installing .deb and .rpm packages.

Coming soon:

  • Windows 10+ with WSL

Currently not compatible:

  • macOS

Hardware compatibility:

  • amd64 or x86_64 CPU architecture
  • (optional) NVIDIA GPU
  • AMD and Intel GPUs are currently not supported (yet!)

Roadmap

  • Kalavai client on Linux
  • [TEMPLATE] Distributed LLM deployment
  • [TEMPLATE] Distributed LLM fine tuning
  • Automagic scale resources from public cloud
  • Kalavai client on Windows
  • Kalavai client on Mac
  • Ray cluster support

Anything missing here? Give us a shout on the discussion board.

Contribute
