The ultimate PyQt6 application featuring the power of OpenAI, Google Gemini, Claude, and various open-source AI models.
It delivers outstanding capabilities for Chat, Image, Vision, Text-To-Speech (TTS), and Speech-To-Text (STT).
Before you begin, ensure you have met the following requirements:
- Python: Make sure you have Python 3.10 or later installed. You can download it from the official Python website.
python --version
- pip: Ensure you have pip installed, the package installer for Python.
- Git: Ensure you have Git installed for version control. You can download it from the official Git website.
- Virtual Environment: It is recommended to use a virtual environment to manage your project dependencies. You can create one using venv:
python -m venv venv
source venv/bin/activate # On Windows use `venv\Scripts\activate`
- IDE/Code Editor: Use an IDE or code editor of your choice. Popular options include PyCharm, VSCode, and Eclipse.
- PlantUML: PlantUML is used for generating UML diagrams. Download it from the official PlantUML website, or use the PyCharm plugin or Xcode extension.
- Clone the repository
git clone https://github.com/hyun-yang/MyChatGPT.git
- With pip:
pip install -r requirements.txt
Or, inside a virtual environment (venv), use this command:
python -m pip install -r requirements.txt
- Run main.py
python main.py
- Configure API Key
  - Open the 'Setting' menu and set your API key.
  - For Ollama, you can use any key, but you need to install Ollama and download the model you want to use.
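To sanity-check that an OpenAI key is valid before launching the app, you can run a short snippet like the one below. This is illustrative only and not part of MyChatGPT; it assumes the official openai Python package is installed and that the key is either pasted in or exported as OPENAI_API_KEY.

```python
# Illustrative key check, independent of MyChatGPT.
from openai import OpenAI

# Pass the key directly, or omit api_key to let the client read OPENAI_API_KEY from the environment.
client = OpenAI(api_key="sk-...")

# Listing available models is a cheap request that fails fast if the key is rejected.
models = client.models.list()
print([m.id for m in models.data][:5])
```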
- Re-run main.py
python main.py
- Python version >= 3.10
- PyQt6
- API Key (OpenAI, Google Gemini, Claude)
- Supports OpenAI, Google Gemini, and Claude
- Supports open-source AI models via the Ollama library
- Supports Chat, Image, Vision, TTS, and STT generation
Claude and Ollama currently do not have a method to retrieve the list of supported models, so you need to open the settings.ini file and add them manually as shown below.
If you are using Ollama, make sure to check the following three things (a quick verification sketch follows this list):
- Install Ollama.
- Download the model you wish to use.
- Open the settings.ini file and add the name of the model.
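A quick way to verify the first two points is a snippet like the one below. It is an illustrative sketch, not code from MyChatGPT; it assumes the ollama Python package is installed, the Ollama server is running, and "llama3.1" (just an example name) has already been pulled.

```python
# Illustrative check that the Ollama server is reachable and a model is available.
import ollama

response = ollama.chat(
    model="llama3.1",  # replace with the model you added to settings.ini
    messages=[{"role": "user", "content": "Say hello in one word."}],
)
print(response["message"]["content"])
```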
Open the 'settings.ini' file, then add the model list:
...
[Claude_Model_List]
claude-3-5-sonnet-20240620=true
claude-3-opus-20240229=true
claude-3-sonnet-20240229=true
claude-3-haiku-20240307=true
[Ollama_Model_List]
llama3.1=true
gemma2=true
gemma2:27b=true
codegemma=true
...
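These sections are plain INI key=value pairs; the true/false value presumably toggles whether a model is offered in the UI. MyChatGPT's own settings handling may differ (a PyQt application would typically use QSettings), but as a rough sketch, the sections can be read with Python's configparser like this:

```python
# Minimal sketch: read the model-list sections from settings.ini.
# Illustrative only; the application's actual settings handling may differ.
import configparser

# Use "=" as the only delimiter so model names containing ":" (e.g. gemma2:27b) stay intact as keys.
config = configparser.ConfigParser(delimiters=("=",))
config.optionxform = str  # preserve the case of model names
config.read("settings.ini")

for section in ("Claude_Model_List", "Ollama_Model_List"):
    if config.has_section(section):
        enabled = [name for name, value in config.items(section) if value.lower() == "true"]
        print(section, enabled)
```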
pyinstaller --add-data "ico/*.svg:ico" --add-data "ico/*.png:ico" --add-data "splash/pyqt-small.png:splash" --icon="ico/app.ico" --windowed --onefile main.py
After generating the executable on Windows, if its icon is not displayed correctly, rename the executable file so that the icon is displayed properly.
- First Run
- Ollama Model List (you need to manually add models and make sure to download the model you wish to use beforehand)
If you encounter the error message below while running or debugging the program on Ubuntu, resolve it as described in the [Fix] section.
qt.qpa.plugin: From 6.5.0, xcb-cursor0 or libxcb-cursor0 is needed to load the Qt xcb platform plugin.
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized.
Reinstalling the application may fix this problem.
Install the following library, then re-run it:
sudo apt-get install -y libxcb-cursor-dev
Use pyinstaller==6.5.0
Refer to requirements.txt
PyQt 6.5.x fails to start on macOS (segmentation fault)
Segmentation fault when packed with PyInstaller on Linux
check_gcp_environment_no_op.cc:29] ALTS: Platforms other than Linux and Windows are not supported issue - Mac
Use grpcio==1.64.1
Refer to requirements.txt
Suppress logs with google.cloud.workflows client instantiation
Distributed under the MIT License.