s3mini
is an ultra-lightweight TypeScript client (~14 KB minified, ≈15% more ops/s) for S3-compatible object storage. It runs on Node, Bun, Cloudflare Workers, and other edge platforms. It has been tested on Cloudflare R2, Backblaze B2, DigitalOcean Spaces, and MinIO. (No browser support!)
- ⚡ Light and fast: averages ≈15% more ops/s and only ~14 KB (minified, not gzipped).
- 🔧 Zero dependencies; supports AWS SigV4 (no pre-signed requests).
- 🌍 Works on Cloudflare Workers; ideal for edge computing, Node, and Bun (no browser support).
- 📦 Only the essential S3 APIs: improved list, put, get, delete, and a few more.
- 🪣 BYOS3: Bring your own S3-compatible bucket (tested on Cloudflare R2, Backblaze B2, DigitalOcean Spaces, MinIO, and Garage! Ceph and AWS are in the queue).
Performance tests were run against a local MinIO instance. Your results may vary depending on environment and network conditions, so take them with a grain of salt.
The library supports a subset of S3 operations, focusing on essential features, making it suitable for environments with limited resources.
- ✅ HeadBucket (bucketExists)
- ✅ CreateBucket (createBucket)
- ✅ ListObjectsV2 (listObjects)
- ✅ GetObject (getObject, getObjectResponse, getObjectWithETag, getObjectRaw, getObjectArrayBuffer, getObjectJSON)
- ✅ PutObject (putObject)
- ✅ DeleteObject (deleteObject)
- ✅ HeadObject (objectExists, getEtag, getContentLength)
- ✅ ListMultipartUploads (listMultipartUploads)
- ✅ CreateMultipartUpload (getMultipartUploadId)
- ✅ CompleteMultipartUpload (completeMultipartUpload)
- ✅ AbortMultipartUpload (abortMultipartUpload)
- ✅ UploadPart (uploadPart)
- ❌ CopyObject: not implemented (TBD)
```bash
npm install s3mini
# or
yarn add s3mini
# or
pnpm add s3mini
```
To use s3mini, you need to set environment variables for your provider credentials and S3 endpoint. Create a `.env` file in your project root directory; check the `example.env` file for reference.
```bash
# On Windows, Mac, or Linux
mv example.env .env
```
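As an illustration only, such a file typically looks like the sketch below; the variable names here are hypothetical, so use the actual names from `example.env`:

```ini
# Hypothetical .env sketch -- use the actual variable names from example.env
S3_ENDPOINT=https://s3.example.com/my-bucket
S3_ACCESS_KEY_ID=your-access-key-id
S3_SECRET_ACCESS_KEY=your-secret-access-key
S3_REGION=auto
```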
⚠️ Environment Support Notice: This library is designed to run in environments like Node.js, Bun, and Cloudflare Workers. It does not support browser environments due to its use of Node.js APIs and polyfills.
Cloudflare Workers: To enable the built-in Node.js Crypto API, add the `nodejs_compat` compatibility flag to your Wrangler configuration file. This also enables `nodejs_compat_v2` as long as your compatibility date is `2024-09-23` or later. Learn more about the Node.js compatibility flag and v2.
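A minimal `wrangler.toml` sketch (the worker name is a placeholder):

```toml
# minimal wrangler.toml sketch
name = "my-worker"                      # placeholder
compatibility_date = "2024-09-23"       # this date or later also enables nodejs_compat_v2
compatibility_flags = ["nodejs_compat"]
```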
```ts
import { s3mini, sanitizeETag } from 's3mini';

const s3client = new s3mini({
  accessKeyId: config.accessKeyId,
  secretAccessKey: config.secretAccessKey,
  endpoint: config.endpoint,
  region: config.region,
});
// Basic bucket ops
let exists: boolean = false;
try {
  // Check if the bucket exists
  exists = await s3client.bucketExists();
} catch (err) {
  throw new Error(`Failed bucketExists() call, wrong credentials maybe: ${err.message}`);
}
if (!exists) {
  // Create the bucket based on the endpoint bucket name
  await s3client.createBucket();
}
// Basic object ops
// key is the name of the object in the bucket
const smallObjectKey: string = 'small-object.txt';
// content is the data you want to store in the object
// it can be a string or a Buffer (recommended for large objects)
const smallObjectContent: string = 'Hello, world!';

// check if the object exists
const objectExists: boolean = await s3client.objectExists(smallObjectKey);
let etag: string | null = null;
if (!objectExists) {
  // put/upload the object; content can be a string or a Buffer
  // to place the object in a "folder", use "folder/filename.txt" as the key
  // The third argument is optional and sets the content type (default is 'application/octet-stream')
  const resp: Response = await s3client.putObject(smallObjectKey, smallObjectContent);
  // example with content type:
  // const resp: Response = await s3client.putObject(smallObjectKey, smallObjectContent, 'image/png');
  // you can also get the ETag via the getEtag method:
  // const etag: string = await s3client.getEtag(smallObjectKey);
  etag = sanitizeETag(resp.headers.get('etag'));
}
// get the object, null if not found
const objectData: string | null = await s3client.getObject(smallObjectKey);
console.log('Object data:', objectData);

// conditional get with ETag: null if not found or unchanged
const response2: Response | null = await s3client.getObjectResponse(smallObjectKey, { 'if-none-match': etag });
if (response2) {
  // ETag changed, so we can read the object data and the new ETag
  // Note: the ETag is not guaranteed to equal the MD5 hash of the object
  // sanitizeETag strips the surrounding quotes
  const etag2: string = sanitizeETag(response2.headers.get('etag'));
  console.log('Object data with ETag:', response2.body, 'ETag:', etag2);
} else {
  console.log('Object not found or ETag still matches (not modified).');
}

// list objects in the bucket, null if the bucket is empty
// Note: listObjects uses the ListObjectsV2 API and iterates over all pages,
// so it returns every object in the bucket, which can take a while.
// To limit the number of objects returned, use the maxKeys option.
// To list objects in a specific "folder", pass "folder/" as the prefix,
// e.g. s3client.listObjects('/', 'myfolder/') (see USAGE.md for the exact signature)
const list: object[] | null = await s3client.listObjects();
if (list) {
  console.log('List of objects:', list);
} else {
  console.log('No objects found in the bucket.');
}

// delete the object
const wasDeleted: boolean = await s3client.deleteObject(smallObjectKey);
// Multipart upload
const multipartKey = 'multipart-object.txt';
const largeBuffer = new Uint8Array(1024 * 1024 * 15); // 15 MB buffer
const partSize = 8 * 1024 * 1024; // 8 MB
const totalParts = Math.ceil(largeBuffer.length / partSize);

// Beware! This always returns a new uploadId;
// if you want to reuse an uploadId, you need to store it somewhere
const uploadId = await s3client.getMultipartUploadId(multipartKey);
const uploadPromises = [];
for (let i = 0; i < totalParts; i++) {
  const partBuffer = largeBuffer.subarray(i * partSize, (i + 1) * partSize);
  // upload each part
  // Note: uploadPart returns a promise, so you can use Promise.all to upload
  // all parts in parallel, but be careful with the number of parallel uploads:
  // too many at once can cause throttling or errors.
  // You can also upload parts in batches (see the sketch after this code block).
  uploadPromises.push(s3client.uploadPart(multipartKey, uploadId, partBuffer, i + 1));
}
const uploadResponses = await Promise.all(uploadPromises);
const parts = uploadResponses.map((response, index) => ({
  partNumber: index + 1,
  etag: response.etag,
}));

// Complete the multipart upload
const completeResponse = await s3client.completeMultipartUpload(multipartKey, uploadId, parts);
const completeEtag = completeResponse.etag;

// List incomplete multipart uploads
// returns the uploadId and key of each upload
const multipartUploads = await s3client.listMultipartUploads();
// Abort the multipart upload
const abortResponse = await s3client.abortMultipartUpload(multipartUploads.key, multipartUploads.uploadId);
// Multipart (ranged) download
// fetch a byte range with getObjectRaw
const rangeStart = 2048 * 1024; // 2 MB
const rangeEnd = 8 * 1024 * 1024 * 2; // 16 MB
const rangeResponse = await s3client.getObjectRaw(multipartKey, false, rangeStart, rangeEnd);
const rangeData = await rangeResponse.arrayBuffer();
```
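To cap parallelism instead of firing every part at once, parts can be uploaded in fixed-size batches. The sketch below reuses the `uploadPart` and `completeMultipartUpload` calls from the example above; the batch size of 4 is an arbitrary illustration, not a library default.

```ts
// Batched multipart upload sketch (assumes s3client, multipartKey, uploadId,
// largeBuffer, partSize, and totalParts from the example above).
const concurrency = 4; // arbitrary; tune to your provider's rate limits
const batchedParts: { partNumber: number; etag: string }[] = [];
for (let start = 0; start < totalParts; start += concurrency) {
  const batch: Promise<{ partNumber: number; etag: string }>[] = [];
  for (let i = start; i < Math.min(start + concurrency, totalParts); i++) {
    const part = largeBuffer.subarray(i * partSize, (i + 1) * partSize);
    batch.push(
      s3client
        .uploadPart(multipartKey, uploadId, part, i + 1)
        .then((res) => ({ partNumber: i + 1, etag: res.etag })),
    );
  }
  // wait for the current batch before starting the next one
  batchedParts.push(...(await Promise.all(batch)));
}
await s3client.completeMultipartUpload(multipartKey, uploadId, batchedParts);
```

Ranged reads can likewise be chained to download a large object in chunks. This is only a sketch: it assumes `getContentLength` returns the object size in bytes and that the end offset passed to `getObjectRaw` is inclusive; check USAGE.md for the exact semantics.

```ts
// Chunked ranged-download sketch (same s3client and multipartKey as above).
const chunkSize = 8 * 1024 * 1024; // 8 MB per request
const totalSize = await s3client.getContentLength(multipartKey);
const chunks: ArrayBuffer[] = [];
for (let offset = 0; offset < totalSize; offset += chunkSize) {
  const end = Math.min(offset + chunkSize, totalSize) - 1; // assuming inclusive end
  const res = await s3client.getObjectRaw(multipartKey, false, offset, end);
  chunks.push(await res.arrayBuffer());
}
```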
For more, check the USAGE.md file, the examples, and the tests.
To run the full e2e test suite locally:
```bash
npm install
npm run build
npm run test:e2e
```
- The e2e tests require real S3-compatible services (see `.env` and `example.env` for the required credentials).
- The tests are designed to run against actual storage backends (MinIO, Backblaze B2, Cloudflare R2, DigitalOcean Spaces, etc.).
- Do not edit the test files, do not add unit tests, and do not add mocks. All tests must remain end-to-end and integration focused.
- For CI, see `.github/workflows/test-e2e.yml` for the automated test process.
For more details, see the `tests/` directory and the USAGE.md file.
- The library masks sensitive information (access keys, session tokens, etc.) when logging.
- Always protect your AWS credentials and avoid hard-coding them in your application. Use environment variables or a secure vault for storing credentials.
- Ensure you have the necessary permissions to access the S3 bucket and perform operations.
- Be cautious when using multipart uploads, as they can incur additional costs if not managed properly.
- Authors are not responsible for any data loss or security breaches resulting from improper usage of the library.
- If you find a security vulnerability, please report it to us directly via email. For more details, please refer to the SECURITY.md file.
Contributions are greatly appreciated! If you have an idea for a new feature or have found a bug, we encourage you to get involved in this order:
- Open/Report Issues or Ideas: If you encounter a problem or have an idea or feature request, please open an issue on GitHub first. Be concise but include as much detail as necessary (environment, error messages, logs, steps to reproduce, etc.) so we can understand and address the issue and have a dialog.
- Create Pull Requests: We welcome PRs! If you want to implement a new feature or fix a bug, feel free to submit a pull request against the latest dev branch. For major changes, it is necessary to discuss your plans in an issue first!
- Lightweight Philosophy: When contributing, keep in mind that s3mini aims to remain lightweight and dependency-free. Please avoid adding heavy dependencies. New features should provide significant value to justify any increase in size.
- Community Conduct: Be respectful and constructive in communications. We want a welcoming environment for all contributors. For more details, please refer to our CODE_OF_CONDUCT.md. No one reads it, but it's there for a reason.
If you figure out a solution to your question or problem on your own, please consider posting the answer or closing the issue with an explanation. It could help the next person who runs into the same thing!
This project is licensed under the MIT License - see the LICENSE.md file for details.
Developing and maintaining s3mini (and other open-source projects) requires time and effort. If you find this library useful, please consider sponsoring its development. Your support helps ensure I can continue improving s3mini and other projects. Thank you!