support download Lora Model from ModelScope and download private mode… by AlphaINF · Pull Request #10762 · vllm-project/vllm · GitHub

support download Lora Model from ModelScope and download private mode… #10762

Open
wants to merge 3 commits into main

Conversation

@AlphaINF commented Nov 29, 2024

Currently, vLLM can't load LoRA models from ModelScope.
Besides, if a model on ModelScope is private, it can't be loaded at all; you have to log in manually before starting vLLM.
This pull request fixes these two problems.
If you want to use private models, you can set MODELSCOPE_ACCESS_TOKEN as an environment variable.
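For illustration, a minimal sketch of the flow described above, assuming the standard ModelScope SDK (`snapshot_download` and `HubApi`); only the `MODELSCOPE_ACCESS_TOKEN` variable name comes from this PR, and the repo id is a placeholder:

```python
import os

from modelscope import snapshot_download
from modelscope.hub.api import HubApi

# Token for private ModelScope repos, read from the environment as in this PR.
token = os.environ.get("MODELSCOPE_ACCESS_TOKEN")
if token:
    # Authenticate the ModelScope hub client before downloading private models.
    HubApi().login(token)

# Download the LoRA adapter to the local ModelScope cache and get its path.
# "my-org/my-lora-adapter" is a placeholder repo id, not from the PR.
lora_path = snapshot_download("my-org/my-lora-adapter")
print(lora_path)
```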


👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only fastcheck CI runs, covering a small and essential subset of CI tests to quickly catch errors. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add ready label to the PR
  • Enable auto-merge.

🚀

@AlphaINF (Author)

/ready

@jeejeelee (Collaborator)

IMHO, downloading LoRA weights from remote servers should not be handled internally by vllm, as this would increase the risk of service crashes and impact performance. What are your thoughts on this? cc @simon-mo @youkaichao @DarkLight1337

@AlphaINF (Author)

> IMHO, downloading LoRA weights from remote servers should not be handled internally by vllm, as this would increase the risk of service crashes and impact performance. What are your thoughts on this? cc @simon-mo @youkaichao @DarkLight1337

Indeed, if the download path is incorrect or a network error occurs (e.g. the LoRA fetched from the internet is missing some files), it will crash the server. However, we can add an additional try-except, or perform the download when starting vLLM or when interacting with /upload_lora.
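As an illustration of that guard (a hedged sketch; the helper name is a placeholder, not a vLLM internal), the remote download can be wrapped so a bad repo id or a network failure is reported instead of taking the server down:

```python
from typing import Optional

from modelscope import snapshot_download


def try_fetch_lora(repo_id: str) -> Optional[str]:
    """Download a LoRA adapter from ModelScope; return None on failure."""
    try:
        return snapshot_download(repo_id)
    except Exception as exc:  # bad repo id, missing files, network errors
        # Let the caller return an error response instead of crashing the server.
        print(f"Failed to download LoRA adapter {repo_id!r}: {exc}")
        return None
```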

Actually, vLLM already supports downloading LoRA weights from Hugging Face, and that has not caused problems.

@jeejeelee (Collaborator)

@AlphaINF I know, so I want to hear their thoughts and suggestions.
BTW, I knew you would point to the existing HuggingFace download support; then other users might also ask us to support downloads from S3...

@DarkLight1337 (Member)

Not sure about dynamic LoRA adapters (I don't have context on this), but since we currently support downloading models from both HF and ModelScope, it makes sense to enable this for LoRA as well.

@AlphaINF (Author)

@jeejeelee After a few days of trial, your concern turned out to be right!
I found several bugs.
One of them is a download bug. For example, if we deploy the main model on four GPUs, then when a user sends a message, vLLM starts four download threads (as many as the number of workers), and multiple threads downloading the same LoRA adapter will eventually cause the download to fail.
What's even worse, when receiving multiple requests, vLLM starts even more download threads.
So if we want to load a LoRA model from a remote path, we need to move the snapshot_download into add_lora.


@jeejeelee (Collaborator)

> @jeejeelee After a few days of trial, your concern turned out to be right! [...] So if we want to load a LoRA model from a remote path, we need to move the snapshot_download into add_lora.

Have you tried using filelock?
BTW, @Jeffwan have you encountered similar issues?

@AlphaINF (Author)

> Have you tried using filelock? BTW, @Jeffwan have you encountered similar issues?

That's a great idea, I will try to add it and test it for a few days!
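For reference, a minimal sketch of the filelock suggestion, assuming the `filelock` package (the same one huggingface_hub uses for its cache locks); the helper name and lock directory are placeholders, not part of this PR:

```python
import hashlib
import os

from filelock import FileLock
from modelscope import snapshot_download


def download_adapter_once(repo_id: str,
                          lock_dir: str = "/tmp/lora_download_locks") -> str:
    """Serialize concurrent downloads of the same adapter across workers."""
    os.makedirs(lock_dir, exist_ok=True)
    lock_name = hashlib.sha256(repo_id.encode()).hexdigest() + ".lock"
    # Only one process downloads at a time; the others block here and then
    # hit the already-populated ModelScope cache.
    with FileLock(os.path.join(lock_dir, lock_name)):
        return snapshot_download(repo_id)
```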
