
OpenAI API Access

You need to acquire an OpenAI API key to use these procedures. Using them will incur costs on your OpenAI account. You can set the API key globally by defining the apoc.openai.key configuration in apoc.conf.
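For example (a minimal sketch; the key value is a placeholder):

apoc.openai.key=<YOUR_OPENAI_API_KEY>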

But you can also use these procedures to call OpenAI-compatible APIs, which may have their own API key (or no API key at all). See the section OpenAI-compatible provider below.

All the following procedures can take the following APOC configuration settings, i.e. in apoc.conf or via a Docker environment variable.

Apoc configuration

| key | description | default |
|-----|-------------|---------|
| apoc.ml.openai.type | "AZURE", "HUGGINGFACE", "ANTHROPIC" or "OPENAI"; indicates whether the API is Azure, HuggingFace, Anthropic or the standard OpenAI one | "OPENAI" |
| apoc.ml.openai.url | the OpenAI endpoint base URL | https://api.openai.com/v1 by default; https://api.anthropic.com/v1 if apoc.ml.openai.type=ANTHROPIC; empty string if apoc.ml.openai.type is AZURE or HUGGINGFACE |
| apoc.ml.azure.api.version | in case of apoc.ml.openai.type=AZURE, indicates the api-version to be passed after the ?api-version= URL parameter | "" |

Moreover, the procedures accept the following configuration keys as the last parameter; if present, they take precedence over the analogous APOC configs.

Table 1. Common configuration parameters

| key | description |
|-----|-------------|
| apiType | analogous to the apoc.ml.openai.type APOC config |
| endpoint | analogous to the apoc.ml.openai.url APOC config |
| apiVersion | analogous to the apoc.ml.azure.api.version APOC config |
| path | customizes the URL portion added to the base URL (defined by the endpoint config). By default it is /embeddings, /completions and /chat/completions for the apoc.ml.openai.embedding, apoc.ml.openai.completion and apoc.ml.openai.chat procedures respectively |
| jsonPath | customizes the JSONPath of the response. The default is $ for the apoc.ml.openai.chat and apoc.ml.openai.completion procedures, and $.data for the apoc.ml.openai.embedding procedure |
| failOnError | if true (default), the procedure fails in case of empty, blank or null input |
| enableBackOffRetries | if set to true, enables the backoff retry strategy for handling failures (default: false) |
| backOffRetries | sets the maximum number of retry attempts before the operation throws an exception (default: 5) |
| exponentialBackoff | if set to true, the wait time between retries grows exponentially; if set to false, it grows linearly (default: false) |
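For example, the retry strategy can be enabled directly in the configuration map passed as the last parameter (a minimal sketch; the retry values are illustrative):

CALL apoc.ml.openai.embedding(['Some Text'], $apiKey, {
    enableBackOffRetries: true, // retry failed requests
    backOffRetries: 3,          // give up after 3 attempts
    exponentialBackoff: true    // grow the wait time exponentially between attempts
}) YIELD index, text, embedding
RETURN index, text, embedding;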

Therefore, we can use the following procedures with the OpenAI Services provided by Azure, pointing to the correct endpoints as explained in the Azure documentation.

That is, if we want to call an endpoint like https://my-resource.openai.azure.com/openai/deployments/my-deployment-id/embeddings?api-version=my-api-version, we can do so by passing the following configuration parameter:

{endpoint: "https://my-resource.openai.azure.com/openai/deployments/my-deployment-id",
 apiVersion: "my-api-version",
 apiType: "AZURE"
}

The /embeddings portion will be added under the hood. Similarly, with apoc.ml.openai.completion, if we want to call an endpoint like https://my-resource.openai.azure.com/openai/deployments/my-deployment-id/completions?api-version=my-api-version, we can pass the same configuration parameter as above, and the /completions portion will be added.

While using apoc.ml.openai.chat, with the same configuration, the URL portion /chat/completions will be added.
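For instance, a complete Azure chat call would look like this (a sketch, reusing the placeholder resource and deployment names above):

CALL apoc.ml.openai.chat([{role: "user", content: "What planet do humans live on?"}],
    $azureApiKey,
    {endpoint: "https://my-resource.openai.azure.com/openai/deployments/my-deployment-id",
     apiVersion: "my-api-version",
     apiType: "AZURE"}) // /chat/completions is appended under the hood
YIELD value
RETURN value;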

Alternatively, we can put the same settings in apoc.conf:

apoc.ml.openai.url=https://my-resource.openai.azure.com/openai/deployments/my-deployment-id
apoc.ml.azure.api.version=my-api-version
apoc.ml.openai.type=AZURE
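With this configuration in place, the procedures can be called without repeating the endpoint settings, e.g. (a sketch):

CALL apoc.ml.openai.embedding(['Some Text'], $azureApiKey, {}) YIELD index, text, embedding;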

Generate Embeddings API

This procedure apoc.ml.openai.embedding can take a list of text strings, and will return one row per string, with the embedding data as a 1536-element vector. It uses the /embeddings API, which is documented here.

Additional configuration is passed through to the API; the default model used is text-embedding-ada-002.

Generate Embeddings Call
CALL apoc.ml.openai.embedding(['Some Text'], $apiKey, {}) yield index, text, embedding;
Table 2. Generate Embeddings Response

| index | text | embedding |
|-------|------|-----------|
| 0 | "Some Text" | [-0.0065358975, -7.9563365E-4, …, -0.010693862, -0.005087272] |

Table 3. Parameters

| name | description |
|------|-------------|
| texts | list of text strings |
| apiKey | OpenAI API key |
| configuration | optional map for entries like model and other request parameters |

We can also pass a custom endpoint: <MyEndpointUrl> entry (it takes precedence over the apoc.ml.openai.url config). The <MyEndpointUrl> can be the complete endpoint (e.g. using Azure: https://my-resource.openai.azure.com/openai/deployments/my-deployment-id/chat/completions?api-version=my-api-version), or contain a %s (e.g. using Azure: https://my-resource.openai.azure.com/openai/deployments/my-deployment-id/%s?api-version=my-api-version) which will be replaced with embeddings, chat/completions or completions by the apoc.ml.openai.embedding, apoc.ml.openai.chat and apoc.ml.openai.completion procedures respectively.

Or an authType: <AUTH_TYPE> entry, which can be authType: "BEARER" (the default), to pass the apiKey via an Authorization: Bearer $apiKey header, or authType: "API_KEY", to pass the apiKey as an api-key: $apiKey header entry.
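For example (a sketch, reusing the placeholder Azure URL above):

CALL apoc.ml.openai.embedding(['Some Text'], $apiKey,
    {endpoint: "https://my-resource.openai.azure.com/openai/deployments/my-deployment-id/%s?api-version=my-api-version",
     authType: "API_KEY"}) // %s becomes embeddings; the key is sent as an api-key header
YIELD index, text, embedding
RETURN index, text, embedding;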

Table 4. Results

| name | description |
|------|-------------|
| index | index entry in original list |
| text | line of text from original list |
| embedding | 1536-element floating point embedding vector for the text-embedding-ada-002 model |
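The index column makes it easy to line the embeddings back up with the input, e.g. to store them on nodes (a sketch, assuming hypothetical Chunk nodes with a text property):

MATCH (c:Chunk)
WITH collect(c) AS chunks, collect(c.text) AS texts
CALL apoc.ml.openai.embedding(texts, $apiKey, {}) YIELD index, embedding
WITH chunks[index] AS chunk, embedding
SET chunk.embedding = embedding;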

Text Completion API

This procedure apoc.ml.openai.completion can continue/complete a given text.

It uses the /completions API, which is documented here.

Additional configuration is passed through to the API; the default model used is text-davinci-003.

Text Completion Call
CALL apoc.ml.openai.completion('What color is the sky? Answer in one word: ', $apiKey, {}) yield value;
Text Completion Response
{ created=1684248202, model="text-davinci-003", id="cmpl-7GqBWwX49yMJljdmnLkWxYettZoOy",
  usage={completion_tokens=2, prompt_tokens=12, total_tokens=14},
  choices=[{finish_reason="stop", index=0, text="Blue", logprobs=null}], object="text_completion"}
Table 5. Parameters

| name | description |
|------|-------------|
| prompt | text to complete |
| apiKey | OpenAI API key |
| configuration | optional map for entries like model, temperature, and other request parameters |

Table 6. Results

| name | description |
|------|-------------|
| value | result entry from OpenAI (containing created, id, model, object, usage(tokens), choices(text, index, finish_reason)) |
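Request parameters such as temperature are passed through the configuration map, and the returned map can be unpacked directly in Cypher (a sketch):

CALL apoc.ml.openai.completion('What color is the sky? Answer in one word: ', $apiKey,
    {temperature: 0}) // 0 makes the output close to deterministic
YIELD value
RETURN value.choices[0].text AS answer;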

OpenLM API

We can also call the Completion APIs of HuggingFace and Cohere, similarly to the OpenLM library, as shown below.

For the HuggingFace API, we have to define the config apiType: 'HUGGINGFACE', since the request body has to be transformed accordingly.

For example:

CALL apoc.ml.openai.completion('[MASK] is the color of the sky', $huggingFaceApiKey,
{endpoint: 'https://api-inference.huggingface.co/models/google-bert/bert-base-uncased', apiType: 'HUGGINGFACE'})

Note that with gpt2 or other text-completion models the answers returned are not valid.

Or also, by using the Cohere API, where we have to define path: '' so that the /completions suffix is not added to the URL:

CALL apoc.ml.openai.completion('What color is the sky? Answer in one word: ', $cohereApiKey,
{endpoint: 'https://api.cohere.ai/v1/generate', path: '', model: 'command'})

Chat Completion API

This procedure apoc.ml.openai.chat takes a list of maps of chat exchanges between assistant and user (with an optional system message), and will return the next message in the flow.

It uses the /chat/completions API, which is documented here.

Additional configuration is passed through to the API; the default model used is gpt-4o.

Chat Completion Call
CALL apoc.ml.openai.chat([
{role:"system", content:"Only answer with a single word"},
{role:"user", content:"What planet do humans live on?"}
],  $apiKey) yield value
Chat Completion Response
{created=1684248203, id="chatcmpl-7GqBXZr94avd4fluYDi2fWEz7DIHL",
object="chat.completion", model="gpt-3.5-turbo-0301",
usage={completion_tokens=2, prompt_tokens=26, total_tokens=28},
choices=[{finish_reason="stop", index=0, message={role="assistant", content="Earth."}}]}
Chat Completion Call with custom model
CALL apoc.ml.openai.chat([
{role:"user", content:"Which athletes won the gold medal in mixed doubles's curling at the 2022 Winter Olympics?"}
],  $apiKey, { model: "gpt-3.5-turbo" }) yield value
Chat Completion Response with custom model
{
  "created" : 1721902606,
  "usage" : {
    "total_tokens" : 59,
    "completion_tokens" : 32,
    "prompt_tokens" : 27
  },
  "model" : "gpt-3.5-turbo-2024-05-13",
  "id" : "chatcmpl-9opocM1gj9AMXIh7oSWWfoumJOTRC",
  "choices" : [ {
    "index" : 0,
    "finish_reason" : "stop",
    "message" : {
      "content" : "The gold medal in mixed doubles curling at the 2022 Winter Olympics was won by the Italian team, consisting of Stefania Constantini and Amos Mosaner.",
      "role" : "assistant"
    }
  } ],
  "system_fingerprint" : "fp_400f27fa1f",
  "object" : "chat.completion"
}
Table 7. Parameters

| name | description |
|------|-------------|
| messages | list of maps of instructions with {role, content}, where role is one of "assistant", "user" or "system", and content is the message text |
| apiKey | OpenAI API key |
| configuration | optional map for entries like model, temperature, and other request parameters |

Table 8. Results

| name | description |
|------|-------------|
| value | result entry from OpenAI (containing created, id, model, object, usage(tokens), choices(message, index, finish_reason)) |
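As with the completion procedure, the assistant's reply can be extracted directly from the returned map (a sketch):

CALL apoc.ml.openai.chat([
    {role: "system", content: "Only answer with a single word"},
    {role: "user", content: "What planet do humans live on?"}
], $apiKey, {temperature: 0}) // temperature and other request parameters are passed through
YIELD value
RETURN value.choices[0].message.content AS answer;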

OpenAI-compatible provider

We can also use these procedures to call OpenAI-compatible APIs, by defining the endpoint config, and possibly the model, path and jsonPath configs.

For example, we can call the Anyscale Endpoints:

CALL apoc.ml.openai.embedding(['Some Text'], $anyScaleApiKey,
{endpoint: 'https://api.endpoints.anyscale.com/v1', model: 'thenlper/gte-large'})

Or via LocalAI APIs (note that the apiKey is null by default):

CALL apoc.ml.openai.embedding(['Some Text'], "ignored",
{endpoint: 'http://localhost:8080/v1', model: 'text-embedding-ada-002'})

We can use the tomasonjo text2cypher model to generate Cypher from text:

WITH 'Node properties are the following:
Movie {title: STRING, votes: INTEGER, tagline: STRING, released: INTEGER}, Person {born: INTEGER, name: STRING}
Relationship properties are the following:
ACTED_IN {roles: LIST}, REVIEWED {summary: STRING, rating: INTEGER}
The relationships are the following:
(:Person)-[:ACTED_IN]->(:Movie), (:Person)-[:DIRECTED]->(:Movie), (:Person)-[:PRODUCED]->(:Movie), (:Person)-[:WROTE]->(:Movie), (:Person)-[:FOLLOWS]->(:Person), (:Person)-[:REVIEWED]->(:Movie)'
as schema,
'Which actors played in the most movies?' as question
CALL apoc.ml.openai.chat([
            {role:"system", content:"Given an input question, convert it to a Cypher query. No pre-amble."},
            {role:"user", content:"Based on the Neo4j graph schema below, write a Cypher query that would answer the user's question:
\n "+ schema +" \n\n Question: "+ question +" \n Cypher query:"}
            ], '<apiKey>', { endpoint: 'http://localhost:8080/chat/completions', model: 'text2cypher-demo-4bit-gguf-unsloth.Q4_K_M.gguf'})
YIELD value RETURN value

Or also, by using the LLMatic library:

CALL apoc.ml.openai.embedding(['Some Text'], "ignored",
{endpoint: 'http://localhost:3000/v1', model: 'thenlper/gte-large'})

Furthermore, we can use the Groq API, e.g.:

CALL apoc.ml.openai.chat([{"role": "user", "content": "Explain the importance of low latency LLMs"}],
    '<apiKey>',
    {endpoint: 'https://api.groq.com/openai/v1', model: 'mixtral-8x7b-32768'})

Anthropic API (OpenAI-compatible)

Another alternative is to use the Anthropic API.

We can use the apoc.ml.openai.chat procedure to leverage the Anthropic Messages API.

These are the default key-value parameters that will be included in the request body, if not specified:

Table 9. Default Anthropic key-value parameters

| key | value |
|-----|-------|
| max_tokens | 1000 |
| model | "claude-3-5-sonnet-20240620" |

For example:

CALL apoc.ml.openai.chat([
      { content: "What planet do humans live on?", role: "user" },
      { content: "Only answer with a single word", role: "assistant" }
    ],
    $anthropicApiKey,
    {apiType: 'ANTHROPIC'}
)
Table 10. Example result
value

{"id": "msg_01NUvsajthuiqRXKJyfs4nBE", "content": [{"text": " in lowercase: What planet do humans live on?", type: "text"}], "model": "claude-3-5-sonnet-20240620", "role": "assistant", "usage": {"output_tokens": 13, input_tokens: 20}, "stop_reason": "end_turn", "stop_sequence": null, "type": "message" }

Moreover, we can define the Anthropic API version via the anthropic-version config parameter, e.g.:

CALL apoc.ml.openai.chat([
      { content: "What planet do humans live on?", role: "user" }
    ],
    $anthropicApiKey,
    {apiType: 'ANTHROPIC', `anthropic-version`: "2023-06-01"}
)

with a result similar to the one above.

Additionally, we can specify a Base64-encoded image to include in the body, e.g.:

CALL apoc.ml.openai.chat([
      { role: "user", content: [
          { type: "image", source: { type: "base64",
              media_type: "image/jpeg",
              data: "<theBase64ImageOfAPizza>" } }
        ]
      }
    ],
    $anthropicApiKey,
    {apiType: 'ANTHROPIC'}
)
Table 11. Example result
value

{"id": "msg_01NxAth45myf36njuh1qwxfM", "content": [{ "text": "This image shows a pizza…​..", "type": "text" } ], "model": "claude-3-5-sonnet-20240620", "role": "assistant", "usage": { "output_tokens": 202, "input_tokens": 192 }, "stop_reason": "end_turn", "stop_sequence": null, "type": "message" }

We can also specify other custom request-body entries, like the max_tokens value, to be included in the config parameter:

CALL apoc.ml.openai.chat([
      { content: "What planet do humans live on?", role: "user" }
    ],
    $anthropicApiKey,
    {apiType: 'ANTHROPIC', max_tokens: 2}
)
Table 12. Example result
value

{ "id": "msg_01HxQbBuPc9xxBDSBc5iWw2P", "content": [ { text": "Hearth", "type": "text" } ], "model": "claude-3-5-sonnet-20240620", "role": "assistant", "usage": { "output_tokens": 10, "input_tokens": 20 }, "stop_reason": "max_tokens", "stop_sequence": null, "type": "message" }

Also, we can use the apoc.ml.openai.completion procedure to leverage the Anthropic Complete API.

These are the default key-value parameters that will be included in the request body, if not specified:

Table 13. Default Anthropic key-value parameters

| key | value |
|-----|-------|
| max_tokens_to_sample | 1000 |
| model | "claude-2.1" |

For example:

CALL apoc.ml.openai.completion('\n\nHuman: What color is sky?\n\nAssistant:',
    $anthropicApiKey,
    {apiType: 'ANTHROPIC'}
)
Table 14. Example result
value

{ "id": "compl_016JGWzFfBQCVWQ8vkoDsdL3", "stop": "Human:", "model": "claude-2.1", "stop_reason": "stop_sequence", "type": "completion", "completion": " The sky appears blue on a clear day. This is due to how air molecules in Earth’s atmosphere scatter sunlight. Shorter wavelengths of light like blue and violet are scattered more, making the sky appear blue to our eyes.", "log_id": "compl_016JGWzFfBQCVWQ8vkoDsdL3" }

Moreover, we can specify other custom request-body entries, like the max_tokens_to_sample value, to be included in the config parameter:

CALL apoc.ml.openai.completion('\n\nHuman: What color is sky?\n\nAssistant:',
    $anthropicApiKey,
    {apiType: 'ANTHROPIC', max_tokens_to_sample: 3}
)
Table 15. Example result
value

{ "id": "compl_015yzL9jDdMQnLSN3jkQifZt", "stop": null, "model": "claude-2.1", "stop_reason": "max_tokens", "type": "completion", "completion": " The sky is", "log_id": "compl_015yzL9jDdMQnLSN3jkQifZt" }

And also, we can specify the API version via the anthropic-version configuration parameter, as in the example above with the apoc.ml.openai.chat procedure.

At the moment, Anthropic does not provide an embeddings API.

Also, at this time a payload with stream: true is not supported, since the result of the apoc.ml.openai procedures must be JSON.