Consider Adding Optional OpenAI Configuration Options · Issue #498 · gitroomhq/postiz-app · GitHub


Open
1 task done
timczm opened this issue Dec 27, 2024 · 3 comments
Assignees
Labels
type: feature-request New feature or request

Comments

@timczm
timczm commented Dec 27, 2024

🔖 Feature description

I’ve noticed that the project already includes the configuration option OPENAI_API_KEY. In addition to this, I believe it would be worth adding more OpenAI-related configuration options (OPENAI_BASE_URL, SMART_LLM, FAST_LLM), which would allow integration with other OpenAI-compatible endpoints (such as OpenRouter).

🎤 Why is this feature needed?

For OPENAI_BASE_URL

There’s no need to tie AI functionality to a specific provider (in this case, OpenAI). In fact, many providers expose APIs compatible with OpenAI's SDK, so it should be possible to switch to other models such as Claude, Gemini, Qwen, and so on.

For SMART_LLM & FAST_LLM

Distinguishing between SMART_LLM and FAST_LLM gives a simple way to manage AI costs: higher-quality requests go to the smart (more expensive) model, while routine, high-volume requests go to the fast (cheaper) one.
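The split could be as simple as a helper that resolves a model name per tier. A minimal sketch — the pickModel name and the default model values are assumptions for illustration, not existing Postiz code:

```typescript
// Hypothetical helper: resolve a model name for a given cost tier.
// SMART_LLM / FAST_LLM come from the environment; the fallbacks mirror
// OpenAI's current general-purpose models (an assumption, not project policy).
type ModelTier = 'smart' | 'fast';

function pickModel(
  tier: ModelTier,
  env: Record<string, string | undefined> = process.env
): string {
  if (tier === 'smart') {
    return env.SMART_LLM || 'gpt-4o';
  }
  return env.FAST_LLM || 'gpt-4o-mini';
}
```

A caller would then request `pickModel('fast')` for cheap tasks (e.g. short rewrites) and `pickModel('smart')` for content generation, without hard-coding model names at each call site.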

✌️ How do you aim to achieve this?

An example of using OpenRouter:

# .env

OPENAI_BASE_URL="https://openrouter.ai/api/v1"
SMART_LLM="openai/gpt-4o"
FAST_LLM="openai/gpt-4o-mini"
# libraries/nestjs-libraries/src/openai/openai.service.ts

import OpenAI from 'openai';

// Point the SDK at an OpenAI-compatible endpoint when OPENAI_BASE_URL is set;
// when it is undefined, the client falls back to the official OpenAI API.
const openai = new OpenAI({
  baseURL: process.env.OPENAI_BASE_URL,
  apiKey: process.env.OPENAI_API_KEY || 'sk-proj-',
});

// Use the configured "smart" model for higher-quality completions,
// defaulting to gpt-4o when SMART_LLM is unset.
const completion = await openai.chat.completions.create({
  model: process.env.SMART_LLM || 'gpt-4o',
  messages: [
    {
      role: 'user',
      content: [{ type: 'text', text: 'Who are you?' }],
    },
  ],
});

console.log(completion.choices[0].message);
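If these variables become part of the configuration surface, a small startup check could catch malformed values early. A sketch under the same assumptions — loadAiConfig, the AiConfig shape, and the defaults are hypothetical, not part of Postiz today:

```typescript
// Hypothetical startup validation for the proposed optional settings.
// OPENAI_BASE_URL must be a parseable URL when set; SMART_LLM / FAST_LLM
// fall back to OpenAI's default models when absent.
interface AiConfig {
  baseURL?: string;
  smartModel: string;
  fastModel: string;
}

function loadAiConfig(env: Record<string, string | undefined>): AiConfig {
  if (env.OPENAI_BASE_URL) {
    // new URL() throws a TypeError on a malformed URL, failing fast at boot.
    new URL(env.OPENAI_BASE_URL);
  }
  return {
    baseURL: env.OPENAI_BASE_URL,
    smartModel: env.SMART_LLM || 'gpt-4o',
    fastModel: env.FAST_LLM || 'gpt-4o-mini',
  };
}
```

With this in place, pointing the app at OpenRouter is just a matter of setting OPENAI_BASE_URL="https://openrouter.ai/api/v1" and provider-prefixed model names, as in the .env example above.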

🔄️ Additional Information

No response

👀 Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find a similar issue

Are you willing to submit PR?

None

@timczm timczm added the type: feature-request New feature or request label Dec 27, 2024
@Decoder76

Hi, I am excited to join your team.

@gitroomhq gitroomhq deleted a comment from Azadbangladeshi-com Jan 6, 2025
@gitroomhq gitroomhq deleted a comment from Azadbangladeshi-com Jan 6, 2025
@gitroomhq gitroomhq deleted a comment from iaskgithub Jan 13, 2025
@uwzis
uwzis commented Mar 26, 2025

I have looked into openai.service.ts and found it wouldn't be hard to implement. When I get some time I will try testing these changes; any help testing for potential issues would be appreciated.

@egelhaus egelhaus marked this as a duplicate of #592 May 5, 2025
@egelhaus
Collaborator
egelhaus commented May 5, 2025

Any update @uwzis ?

4 participants