Using Azure OpenAI Service with Local bolt.new
bolt.new is an AI-powered full-stack web development platform. It can also be run locally from its GitHub repository. While bolt.new uses Anthropic's Claude 3.5 Sonnet by default, this article modifies it to work with Azure OpenAI Service.
Implementing Azure
When modifying the code, the original code is retained as comments. Below, only the modified sections are shown; unchanged code is abbreviated with "…".
Adding Libraries
First, add the necessary library. In bolt.new/package.json, include the @ai-sdk/azure library as follows:
{
  ...
  "dependencies": {
    "@ai-sdk/anthropic": "^0.0.39",
    "@ai-sdk/azure": "^1.0.5", // <- Added
    ...
  },
  ...
}
Setting Azure Environment Variables
Add the Azure resource name, API key, and deployment name to the bolt.new/.env.local file:
...
AZURE_RESOURCE_NAME=YOUR_RESOURCE_NAME
AZURE_API_KEY=YOUR_API_KEY
AZURE_DEPLOYMENT_NAME=YOUR_DEPLOYMENT_NAME
The resource name refers to the ${resourceName} part of the Azure OpenAI endpoint URL: https://${resourceName}.openai.azure.com/.
Note that the deployment name may differ from the model name, so be cautious.
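To see why the two must not be confused, here is a sketch (not part of bolt.new; the function name is hypothetical) of the chat-completions URL that requests to Azure OpenAI are sent to. The path contains the deployment name, not the model name:

```typescript
// Illustrative only: the Azure OpenAI chat-completions endpoint is keyed by
// the *deployment* name, while the model name never appears in the URL.
function buildChatCompletionsUrl(
  resourceName: string,
  deploymentName: string,
  apiVersion: string,
): string {
  return (
    `https://${resourceName}.openai.azure.com` +
    `/openai/deployments/${deploymentName}/chat/completions` +
    `?api-version=${apiVersion}`
  );
}

// Example: a deployment named "my-gpt4o" may serve the model "gpt-4o".
```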
Next, let's read these environment variables. Add the following code to bolt.new/app/lib/.server/llm/api-key.ts:
...
export function getAzureEnv(cloudflareEnv?: Env) {
  // Cloudflare Pages environment
  if (cloudflareEnv) {
    return {
      resourceName: cloudflareEnv.AZURE_RESOURCE_NAME,
      apiKey: cloudflareEnv.AZURE_API_KEY,
      deploymentName: cloudflareEnv.AZURE_DEPLOYMENT_NAME,
    };
  }

  // Local development environment
  return {
    resourceName: env.AZURE_RESOURCE_NAME || '',
    apiKey: env.AZURE_API_KEY || '',
    deploymentName: env.AZURE_DEPLOYMENT_NAME || '',
  };
}
When running locally with pnpm (which uses wrangler pages dev to emulate Cloudflare Pages), the environment variables are passed in via cloudflareEnv. For apps deployed to Azure or similar platforms, register the variables in the platform's settings so they can be read through env.
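The precedence between the two sources can be seen in a standalone sketch of the same fallback pattern (the plain objects and the resolveAzureEnv name below are hypothetical stand-ins for the real Env type and process.env):

```typescript
// Hypothetical stand-in for the shape of both env sources.
interface AzureEnvVars {
  AZURE_RESOURCE_NAME?: string;
  AZURE_API_KEY?: string;
  AZURE_DEPLOYMENT_NAME?: string;
}

// Same precedence as getAzureEnv: cloudflareEnv wins when it is present,
// otherwise fall back to the process environment, defaulting to ''.
function resolveAzureEnv(processEnv: AzureEnvVars, cloudflareEnv?: AzureEnvVars) {
  if (cloudflareEnv) {
    return {
      resourceName: cloudflareEnv.AZURE_RESOURCE_NAME ?? '',
      apiKey: cloudflareEnv.AZURE_API_KEY ?? '',
      deploymentName: cloudflareEnv.AZURE_DEPLOYMENT_NAME ?? '',
    };
  }

  return {
    resourceName: processEnv.AZURE_RESOURCE_NAME ?? '',
    apiKey: processEnv.AZURE_API_KEY ?? '',
    deploymentName: processEnv.AZURE_DEPLOYMENT_NAME ?? '',
  };
}
```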
Using Azure OpenAI
To use Azure OpenAI, create a model from the environment variables configured above. Add the following code to bolt.new/app/lib/.server/llm/model.ts:
...
import { createAzure } from '@ai-sdk/azure';
...

export function getAzureModel(resourceName: string, apiKey: string, deploymentName: string) {
  const azure = createAzure({
    resourceName: resourceName,
    apiKey: apiKey,
  });

  return azure(deploymentName);
}
When switching to Azure OpenAI, you also need to review the max tokens setting. Update bolt.new/app/lib/.server/llm/constants.ts as follows:
// see https://docs.anthropic.com/en/docs/about-claude/models
// export const MAX_TOKENS = 8192;
// see https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models?tabs=python-secure%2Cglobal-standard%2Cstandard-chat-completions#gpt-4o-and-gpt-4-turbo
export const MAX_TOKENS = 16384;
The maximum token count depends on the model version deployed on Azure OpenAI Service. For example, according to the Azure OpenAI models documentation, the 2024-08-06 version of GPT-4o supports up to 16,384 output tokens, while the 2024-05-13 version supports only 4,096.
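Because the limit varies by model version, one option is to record the known output-token limits in a small table instead of a single constant. This is a sketch, not code from bolt.new; the names and the conservative fallback value are assumptions:

```typescript
// Known max *output* token limits per GPT-4o version on Azure OpenAI,
// taken from the values discussed above. Versions not listed fall back
// to the smaller, safer limit.
const GPT4O_MAX_OUTPUT_TOKENS: Record<string, number> = {
  '2024-08-06': 16384,
  '2024-05-13': 4096,
};

function maxTokensFor(modelVersion: string): number {
  return GPT4O_MAX_OUTPUT_TOKENS[modelVersion] ?? 4096;
}
```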
Next, update the model-calling code. Modify bolt.new/app/lib/.server/llm/stream-text.ts as follows, keeping the Anthropic code as comments:
...
// import { getAPIKey } from '~/lib/.server/llm/api-key';
import { getAzureEnv } from '~/lib/.server/llm/api-key';
// import { getAnthropicModel } from '~/lib/.server/llm/model';
import { getAzureModel } from '~/lib/.server/llm/model';
...

// export function streamText(messages: Messages, env: Env, options?: StreamingOptions) {
//   return _streamText({
//     model: getAnthropicModel(getAPIKey(env)),
//     system: getSystemPrompt(),
//     maxTokens: MAX_TOKENS,
//     headers: {
//       'anthropic-beta': 'max-tokens-3-5-sonnet-2024-07-15',
//     },
//     messages: convertToCoreMessages(messages),
//     ...options,
//   });
// }
export function streamText(messages: Messages, env: Env, options?: StreamingOptions) {
  const { resourceName, apiKey, deploymentName } = getAzureEnv(env);

  if (!resourceName || !apiKey || !deploymentName) {
    throw new Error('Azure OpenAI credentials are not properly configured');
  }

  const stream = _streamText({
    model: getAzureModel(resourceName, apiKey, deploymentName),
    system: getSystemPrompt(),
    maxTokens: MAX_TOKENS,
    headers: {},
    messages: convertToCoreMessages(messages),
    ...options,
  });

  return stream;
}
Execution
Once the modifications are complete, run the following commands to start bolt.new:
pnpm install
pnpm run build
pnpm run start
Please note that depending on your Azure OpenAI Service plan, you may encounter rate limits.