8000 bump 0.2.0 by vince62s · Pull Request #227 · eole-nlp/eole · GitHub

bump 0.2.0 #227


Merged
merged 1 commit on Apr 3, 2025
10 changes: 10 additions & 0 deletions CHANGELOG.md
@@ -1,6 +1,16 @@
# Changelog

This is a centralised version of the automatically generated GitHub release changelogs.

## 0.2.0

* Fix docs build/deploy by @francoishernandez in https://github.com/eole-nlp/eole/pull/216
* Enable HF nllb conversion by @francoishernandez in https://github.com/eole-nlp/eole/pull/204
* Introduce pure BF16 training with Kahan summation (torch-optimi package) by @vince62s in https://github.com/eole-nlp/eole/pull/213
* Ensure unicode support, strip carriage returns from vocab by @ArtanisTheOne in https://github.com/eole-nlp/eole/pull/215
* Recipe to train estimator for Eurollm by @vince62s in https://github.com/eole-nlp/eole/pull/219
* Support Mistral-3.1-24B by @vince62s in https://github.com/eole-nlp/eole/pull/220
* Fix typo in wmt17 readme configuration names by @francoishernandez in https://github.com/eole-nlp/eole/pull/224
* better lora merging + fixes by @vince62s in https://github.com/eole-nlp/eole/pull/226

## 0.1.2

7 changes: 4 additions & 3 deletions README.md
@@ -9,6 +9,7 @@ Our goal is to provide a comprehensive yet compact and modular codebase for expe

## Latest developments

- **Mistral-3.1-24B-instruct** support (text and image input)
- **Pure-BF16 Training** thanks to [Kahan Summation](https://arxiv.org/pdf/2010.06192) implemented [here](https://optimi.benjaminwarner.dev/kahan_summation/)
- **Web-based (Google translator-like) interface** featuring the latest EuroLLM-8B-Instruct LLM: read more [here](https://github.com/eole-nlp/eole/tree/main/recipes/eurollm)
- **Estimator layer**, which enables rescoring multiple beams within the same model. Read the articles [here](https://medium.com/p/05b00b271a47) and [here](https://medium.com/p/7dccfe167814)
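For context on the pure-BF16 item above: Kahan (compensated) summation carries the rounding error of each addition forward, which is what makes low-precision weight updates viable. A minimal, dependency-free Python sketch of the idea (not eole's actual implementation, which relies on the torch-optimi package):

```python
def kahan_sum(values):
    """Compensated (Kahan) summation: track lost low-order bits in `comp`."""
    total = 0.0
    comp = 0.0  # running compensation for rounding error
    for x in values:
        y = x - comp            # fold the previously lost error back in
        t = total + y           # low-order digits of y may be lost here
        comp = (t - total) - y  # recover exactly what was lost
        total = t
    return total

values = [0.1] * 10
naive = sum(values)        # drifts away from the exact result 1.0
kahan = kahan_sum(values)  # stays at least as close to 1.0
```

The same trick, applied per-parameter during optimizer steps, recovers most of the accuracy that plain BF16 accumulation would lose.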
@@ -62,12 +63,12 @@ You can customize the workflow and build your own images based on specific needs

To pull the Docker image:
```bash
docker pull ghcr.io/eole-nlp/eole:0.1.2-torch2.5.1-ubuntu22.04-cuda12.4
docker pull ghcr.io/eole-nlp/eole:0.2.0-torch2.6.0-ubuntu22.04-cuda12.6
```

Example one-liner to run a container and open a bash shell within it:
```bash
docker run --rm -it --runtime=nvidia ghcr.io/eole-nlp/eole:0.1.2-torch2.5.1-ubuntu22.04-cuda12.4
docker run --rm -it --runtime=nvidia ghcr.io/eole-nlp/eole:0.2.0-torch2.6.0-ubuntu22.04-cuda12.6
```

> **Note**: Ensure you have the [Nvidia Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) (formerly nvidia-docker) installed to take advantage of CUDA/GPU features.
@@ -82,7 +83,7 @@ Depending on your needs, you can add various flags:
#### Requirements

- Python >= 3.10
- PyTorch >= 2.5 < 2.6
- PyTorch >= 2.5 < 2.8
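A quick way to sanity-check an environment against these bounds. This is a hypothetical helper, not part of eole; it only compares version strings so the sketch stays dependency-free (in practice you would pass `torch.__version__`):

```python
import sys

def version_in_range(version: str, lo: tuple, hi: tuple) -> bool:
    """True if lo <= (major, minor) of `version` < hi.
    Ignores local build suffixes such as '+cu126'."""
    major, minor = (int(p) for p in version.split("+")[0].split(".")[:2])
    return lo <= (major, minor) < hi

# Python requirement: >= 3.10
python_ok = sys.version_info >= (3, 10)

# PyTorch requirement: >= 2.5, < 2.8
torch_ok = version_in_range("2.6.0+cu126", (2, 5), (2, 8))
```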

#### Installation from Source

4 changes: 2 additions & 2 deletions setup.py
@@ -11,7 +11,7 @@
description="Open language modeling toolkit based on PyTorch",
long_description=long_description,
long_description_content_type="text/markdown",
version="0.1.2",
version="0.2.0",
packages=find_packages(),
project_urls={
"Source": "https://github.com/eole-nlp/eole/",
@@ -39,7 +39,7 @@
"spacy",
"subword-nmt>=0.3.7",
"tensorboard>=2.3",
"torch>=2.5,<2.7",
"torch>=2.5,<2.8",
"torch-optimi",
"uvicorn",
"waitress",