A platform for GPU programming challenges. Write efficient GPU kernels and compare your solutions with other developers!
- Node.js 18+ and npm/yarn/pnpm
- PostgreSQL (for local development) or a Supabase account
- Git
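To confirm the Node.js requirement before continuing, a quick version check:

```bash
node --version   # should print v18.x or newer
npm --version    # or yarn --version / pnpm --version, whichever you use
```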
- Clone the repository:
git clone https://github.com/tensara/tensara
cd tensara
- Install dependencies:
npm install
# or
yarn
# or
pnpm install
- Set up your environment variables by copying the example file:
cp .env.example .env
You have two options for setting up the database: a local PostgreSQL installation or a hosted Supabase project. For a local PostgreSQL setup:
- Install PostgreSQL on your machine if you haven't already:
- macOS (using Homebrew):
brew install postgresql
- Linux (Debian/Ubuntu):
sudo apt-get install postgresql
- Windows: download the installer from the PostgreSQL website
- Start the PostgreSQL service:
- macOS:
brew services start postgresql
- Linux:
sudo service postgresql start
- Windows: It should start automatically as a service
- Create a new database:
createdb tensara_db
- Update your .env file with the local PostgreSQL connection string:
DATABASE_URL="postgresql://your_username@localhost:5432/tensara_db"
To use a hosted Supabase project instead:
- Create a new project on Supabase
- Once your project is created, go to Settings > Database to find your connection string
- Update your .env file with the Supabase connection string:
DATABASE_URL="postgresql://postgres:[YOUR-PASSWORD]@db.[YOUR-PROJECT-REF].supabase.co:5432/postgres"
- Push the database schema:
npx prisma db push
- Generate Prisma Client:
npx prisma generate
- Push the problems to the database by following the steps in the problems repository (you can verify the result with Prisma Studio, as shown after these steps).
- Start the development server:
npm run dev
# or
yarn dev
# or
pnpm dev
Your app should now be running at http://localhost:3000!
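If you want to double-check the database at any point (for example, that the schema and problems were pushed correctly), Prisma Studio ships with the Prisma CLI and gives a quick table browser:

```bash
npx prisma studio
# opens a local table browser, by default at http://localhost:5555
```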
Create a .env file in the root directory with the following variables:
# Database connection string
DATABASE_URL="your_connection_string_here"
# NextAuth configuration
NEXTAUTH_SECRET="your_nextauth_secret"
AUTH_GITHUB_ID=""
AUTH_GITHUB_SECRET=""
NEXTAUTH_URL="http://localhost:3000"
# Google Analytics ID (can be ignored until production)
NEXT_PUBLIC_GA_ID=""
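# Modal configuration (used for running checker and benchmark jobs)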
MODAL_CHECKER_SLUG=""
MODAL_BENCHMARK_SLUG=""
MODAL_ENDPOINT=""
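If you need a value for NEXTAUTH_SECRET, any sufficiently random string works; a common way to generate one:

```bash
openssl rand -base64 32
```

AUTH_GITHUB_ID and AUTH_GITHUB_SECRET come from a GitHub OAuth app (GitHub Settings > Developer settings > OAuth Apps); for local development the authorization callback URL is typically http://localhost:3000/api/auth/callback/github.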
Thank you to our sponsors who help make Tensara possible:
- Modal - Modal lets you run jobs in the cloud, by just writing a few lines of Python. Customers use Modal to deploy Gen AI models at large scale, fine-tune large language models, run protein folding simulations, and much more.
We use Modal to securely run accurate benchmarks on various GPUs.
Interested in sponsoring? Contact us at sponsor@tensara.org