Welcome to oobabooga-macOS Discussions! #1
Replies: 3 comments 6 replies
-
This is awesome. I have been seeking a workable demo to build with CMake and run Llama2-70 on an Apple Silicon chip, and this repo helps a lot!
-
What is the status of this project? Is it still active? It would be amazing to have an easy-to-install macOS version of oobabooga optimized for Apple Silicon that can run GPTQ and AWQ models. Are there any plans to merge this project back into the main oobabooga development branch?
-
Thanks for your interest. I've been hacking around on this in my local repository off and on, alongside several other things I am working on, but I can make this a priority. Life has happened and I haven't been able to push things out as much as I'd like, and I need to get after the things I have to push. I'm in the process of getting things updated and will push them in the next couple of days. I'm a team of one right now and could use some help testing and prioritizing things here.

I have a new build order and will try to get that updated shortly. It seems the PyPI pre-compiled wheels are not optimized, and a local recompile is necessary to take full advantage of the Apple Silicon M-series processors. I'll have that updated shortly as well; I just want to do a few more tests to make sure I can get a consistent build and consistent performance out of it. Some other support packages seem to be an issue too, such as CMake: I had to update my local CMake in order to get llama-cpp-python to rebuild properly.

I can get everything up in pieces into a dev or test branch for people to test. I haven't made all my changes compatible with the original oobabooga, so if you want to run my version, it will likely not work on CUDA; I have concentrated specifically on Apple Silicon, since that's what I use. I've also hacked up Coqui/XTTS to run on Apple Silicon a bit faster than it does using their repo and just the CPU, but that's a work in progress. I have several other items in progress, like getting AllTalk working as well. Things I have tested and made some changes to are:

- Core
- Extras

Building and growing a community around this, optimized for Apple Silicon, was my goal initially, and I am still willing to support it. I started down this road less than a year ago and have made quite a bit of progress just working it out myself, so additional assistance is more than welcome. Would love to hear your thoughts. Be well...
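For anyone who wants to try the local recompile described above, here is a minimal sketch of a from-source rebuild of llama-cpp-python with Metal enabled. The `CMAKE_ARGS`/`LLAMA_METAL` flow follows llama-cpp-python's build documentation, but flag names have changed between versions, so treat this as a starting point rather than the exact build used in this repo:

```shell
# Sketch: rebuild llama-cpp-python locally for Apple Silicon (Metal)
# instead of using the generic pre-compiled PyPI wheel.
# Assumption: the package honors CMAKE_ARGS and the LLAMA_METAL flag,
# as documented for llama-cpp-python; names may vary by version.

# A reasonably recent CMake is needed for the rebuild to succeed:
cmake --version

# Skip any cached wheel and force a local compile with Metal enabled:
CMAKE_ARGS="-DLLAMA_METAL=on" \
  pip install --no-cache-dir --force-reinstall llama-cpp-python
```

If the build picks up an old system CMake, upgrading it first (e.g. via Homebrew) is usually what unblocks the compile, which matches the experience described above.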
-
👋 Welcome!
We’re using Discussions as a place to connect with other members of our community. We hope that you build together 💪.
To get started, comment below with an introduction of yourself and tell us about what you do with this community.
I am seeking ideas about where to concentrate my efforts next. If anyone would like to help decide what's important to Apple Silicon users who want AI, data analytics, machine learning, and other tools optimized for the Apple Silicon M1/M2 GPU, this is the place.
In the future, some of this may be combined into my fork of oobabooga/text-generation-webui, and the more general bits put into a new repository describing how to set up all the Python tools for AI, data analytics, and scientific computing, optimized for Apple Silicon, in one place for all.
Thank you to everyone who helps make the information contained here "less wrong." (Sorry, Yudkowsky, couldn't help it.)