Using Gemini with the OpenAI Library
According to this article, Gemini can now be used with the OpenAI library, so I decided to give it a try.
Currently, only the Chat Completions API and the Embeddings API are available.
I tried it out in both Python and JavaScript.
Python
First, let’s set up the environment.
pip install openai python-dotenv
Next, let’s run the following code.
import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")

client = OpenAI(
    api_key=GOOGLE_API_KEY,
    base_url="https://generativelanguage.googleapis.com/v1beta/"
)

response = client.chat.completions.create(
    model="gemini-1.5-flash",
    n=1,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": "Explain briefly (less than 30 words) to me how AI works."
        }
    ]
)

print(response.choices[0].message.content)
The following response was returned.
AI mimics human intelligence by learning patterns from data, using algorithms to solve problems and make decisions.
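Streaming also appears to go through the same Chat Completions endpoint. Here is a minimal sketch, assuming the stream parameter is passed through to Gemini (I am not certain the compatibility layer supported this at the time of writing):

import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
client = OpenAI(
    api_key=os.getenv("GOOGLE_API_KEY"),
    base_url="https://generativelanguage.googleapis.com/v1beta/"
)

# stream=True yields chunks as they are generated instead of one final response.
# Assumption: the Gemini compatibility layer honors this parameter.
stream = client.chat.completions.create(
    model="gemini-1.5-flash",
    messages=[
        {"role": "user", "content": "Explain briefly (less than 30 words) to me how AI works."}
    ],
    stream=True,
)
for chunk in stream:
    # Each chunk carries an incremental delta; content can be None on the final chunk.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
print()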
In the content field, you can pass either a plain string or a list of parts such as {"type": "text", ...}.
import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")

client = OpenAI(
    api_key=GOOGLE_API_KEY,
    base_url="https://generativelanguage.googleapis.com/v1beta/"
)

response = client.chat.completions.create(
    model="gemini-1.5-flash",
    n=1,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Explain briefly (less than 30 words) to me how AI works.",
                },
            ]
        }
    ]
)

print(response.choices[0].message.content)
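The Embeddings API mentioned at the top is the other endpoint that should work through this base URL. Here is a minimal sketch; the model name text-embedding-004 is my assumption, so check the Gemini documentation for the models actually exposed:

import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
client = OpenAI(
    api_key=os.getenv("GOOGLE_API_KEY"),
    base_url="https://generativelanguage.googleapis.com/v1beta/"
)

# "text-embedding-004" is an assumed model name; verify it in the Gemini docs.
response = client.embeddings.create(
    model="text-embedding-004",
    input="Explain briefly to me how AI works.",
)
print(len(response.data[0].embedding))  # dimensionality of the returned vector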
Back to the Chat Completions API: image and audio inputs produced errors.
Sample code for image input
import base64
import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")

client = OpenAI(
    api_key=GOOGLE_API_KEY,
    base_url="https://generativelanguage.googleapis.com/v1beta/"
)

# Encode the PNG as a base64 string
with open("test.png", "rb") as image:
    b64str = base64.b64encode(image.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gemini-1.5-flash",
    # model="gpt-4o",
    n=1,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Describe the image below.",
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/png;base64,{b64str}"
                    }
                }
            ]
        }
    ]
)

print(response.choices[0].message.content)
Sample code for audio input
import base64
import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")

client = OpenAI(
    api_key=GOOGLE_API_KEY,
    base_url="https://generativelanguage.googleapis.com/v1beta/"
)

# Encode the WAV as a base64 string
with open("test.wav", "rb") as audio:
    b64str = base64.b64encode(audio.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gemini-1.5-flash",
    # model="gpt-4o-audio-preview",
    n=1,
    modalities=["text"],
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "What does he say?",
                },
                {
                    "type": "input_audio",
                    "input_audio": {
                        "data": b64str,
                        "format": "wav",
                    }
                }
            ]
        }
    ]
)

print(response.choices[0].message.content)
The following error response was returned.
openai.BadRequestError: Error code: 400 - [{'error': {'code': 400, 'message': 'Request contains an invalid argument.', 'status': 'INVALID_ARGUMENT'}}]
Currently, only text input is supported, but it seems that image and audio inputs will be available in the future.
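Until then, one way to fail gracefully is to catch openai.BadRequestError, the exception class shown in the traceback above. A minimal sketch:

import base64
import os
import openai
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
client = OpenAI(
    api_key=os.getenv("GOOGLE_API_KEY"),
    base_url="https://generativelanguage.googleapis.com/v1beta/"
)

with open("test.wav", "rb") as audio:
    b64str = base64.b64encode(audio.read()).decode("utf-8")

try:
    response = client.chat.completions.create(
        model="gemini-1.5-flash",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "What does he say?"},
                {"type": "input_audio",
                 "input_audio": {"data": b64str, "format": "wav"}},
            ],
        }],
    )
    print(response.choices[0].message.content)
except openai.BadRequestError as e:
    # Gemini currently rejects audio (and image) parts with a 400,
    # so report it instead of crashing.
    print("Input type not supported yet:", e)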
JavaScript
Let’s take a look at the JavaScript sample code.
First, let’s set up the environment.
npm init -y
npm install openai
npm pkg set type=module
Next, let’s run the following code.
import OpenAI from "openai";

const GOOGLE_API_KEY = process.env.GOOGLE_API_KEY;

const openai = new OpenAI({
  apiKey: GOOGLE_API_KEY,
  baseURL: "https://generativelanguage.googleapis.com/v1beta/"
});

const response = await openai.chat.completions.create({
  model: "gemini-1.5-flash",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    {
      role: "user",
      content: "Explain briefly (less than 30 words) to me how AI works",
    },
  ],
});

console.log(response.choices[0].message.content);
When running the code, put your API key in a .env file; the --env-file flag loads it at runtime (this flag requires Node.js 20.6 or later).
node --env-file=.env run.js
The following response was returned.
AI systems learn from data, identify patterns, and make predictions or decisions based on those patterns.
It's great that we can use other models through the same library.
Personally, I'm happy about this because the OpenAI library makes it easy to edit conversation history, as the sketch below illustrates.
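Since a conversation is just a list of message dicts, editing history amounts to ordinary list manipulation. A minimal sketch; the ask helper is my own illustration, not part of the library:

import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
client = OpenAI(
    api_key=os.getenv("GOOGLE_API_KEY"),
    base_url="https://generativelanguage.googleapis.com/v1beta/"
)

# The whole conversation lives in this list, so "editing history"
# is just appending, slicing, or deleting entries.
messages = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(text):
    messages.append({"role": "user", "content": text})
    response = client.chat.completions.create(
        model="gemini-1.5-flash",
        messages=messages,
    )
    reply = response.choices[0].message.content
    # Keep the assistant turn so the next call sees the full history.
    messages.append({"role": "assistant", "content": reply})
    return reply

print(ask("Explain briefly how AI works."))
print(ask("Now give a one-sentence example."))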