❗ Enable LLM fine-tuning tests when no quantization is specified by arnavgarg1 · Pull Request #3626 · ludwig-ai/ludwig · GitHub

❗ Enable LLM fine-tuning tests when no quantization is specified #3626


Merged: 7 commits merged into master on Sep 21, 2023

Conversation

arnavgarg1
Contributor
@arnavgarg1 arnavgarg1 commented Sep 17, 2023

All of our fine-tuning tests were getting skipped because our check for CUDA was too strict: we only wanted to require CUDA when a quantization_config was specified. This was missed because the tests were too complicated and contained a lot of branching logic.

This PR refactors our existing LLM fine-tuning tests by splitting them into two:

  1. Without Quantization
  2. With Quantization

All of the without-quantization tests will run going forward, and the quantization tests will be skipped if no GPUs are available.
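The split skip condition can be sketched as follows (a minimal illustration; `should_skip` and its arguments are hypothetical names, not Ludwig's actual test helpers):

```python
def should_skip(quantization_config, cuda_available):
    """Skip a fine-tuning test only when it requires quantization
    but no CUDA GPU is present; non-quantized tests always run."""
    return quantization_config is not None and not cuda_available

# Non-quantized fine-tuning runs everywhere:
assert not should_skip(None, cuda_available=False)
# Quantized fine-tuning is skipped on CPU-only machines:
assert should_skip({"bits": 4}, cuda_available=False)
```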

What's included in the non-quantization tests (note: none of these were running until now; they were all being incorrectly skipped):

  1. Full Fine-tuning
  2. LoRA fine-tuning using defaults
  3. LoRA fine-tuning using modified parameters (NEW)
  4. AdaLoRA fine-tuning using defaults (NEW)
  5. AdaLoRA fine-tuning using modified parameters (NEW)
  6. Adaption Prompt using defaults
  7. Adaption Prompt using modified parameters (NEW)
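As a rough illustration, a "LoRA fine-tuning using modified parameters" case might use a config along these lines (field names and values are assumptions based on Ludwig's adapter config, not copied from the PR):

```python
# Hypothetical sketch of an LLM fine-tuning config with a tweaked
# LoRA adapter; exact keys and values are illustrative assumptions.
config = {
    "model_type": "llm",
    "base_model": "hf-internal-testing/tiny-random-GPTJForCausalLM",
    "adapter": {
        "type": "lora",
        "r": 4,          # smaller rank than the default
        "dropout": 0.1,  # non-default dropout
    },
    "trainer": {"type": "finetune", "epochs": 1},
}

assert config["adapter"]["type"] == "lora"
```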

It also sets the pytest mark once at the top of the file, since we were not applying this marker consistently throughout.
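Setting the mark once at module scope uses pytest's module-level `pytestmark` variable (a standard pytest pattern; the `llm` mark name here is an assumption, not necessarily the one used in the PR):

```python
import pytest

# One module-level mark instead of decorating every test individually;
# pytest applies `pytestmark` to all tests collected from this file.
pytestmark = pytest.mark.llm

def test_lora_finetune_defaults():
    ...  # each test in this file now carries the `llm` mark
```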

@github-actions
github-actions bot commented Sep 17, 2023

Unit Test Results

 4 files ±0   4 suites ±0   30m 56s ⏱️ (−3m 47s)
31 tests ±0  26 ✔️ ±0   5 💤 ±0  0 ❌ ±0
62 runs  ±0  52 ✔️ ±0  10 💤 ±0  0 ❌ ±0

Results for commit 64f24ed. ± Comparison against base commit d095af8.


@arnavgarg1 arnavgarg1 changed the title Enable LLM fine-tuning tests when no quantization is specified ❗ Enable LLM fine-tuning tests when no quantization is specified Sep 21, 2023
@arnavgarg1 arnavgarg1 merged commit f69f9e2 into master Sep 21, 2023
@arnavgarg1 arnavgarg1 deleted the finetunining-tests branch September 21, 2023 07:52