Posts

Showing posts from November 2024

Using Azure OpenAI Service with Local bolt.new

bolt.new is an AI-powered full-stack web development platform. It can also be run locally from a GitHub repository. While bolt.new uses Anthropic Claude 3.5 Sonnet by default, this time we'll modify it to work with Azure.

Implementing Azure
When making code modifications, the original code is retained as comments. Below, only the modified sections of the code are shown; unchanged original code is abbreviated with "…".

Adding Libraries
First, add the necessary library. In bolt.new/package.json, include the @ai-sdk/azure library as follows: { ... "dependencies": { "@ai-sdk/anthropic": "^0.0.39", "@ai-sdk/azure": "^1.0.5", // <- Added ... }, ... }

Setting Azure Environment Variables
Next, add the Azure resource name, API key, and deployment name to the bolt.new/.env.local file: ... AZURE_RESOURCE_NAME=Y
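As a rough sketch of how the added dependency can be wired up, the @ai-sdk/azure package provides a createAzure provider that can replace the default Anthropic model. The function name getAzureModel and the environment variable names for the API key and deployment are assumptions for illustration; the exact file and code in the article may differ.

```ts
// Sketch: constructing an Azure OpenAI model with the Vercel AI SDK (@ai-sdk/azure).
// The resource name, API key, and deployment name are expected to come from the
// environment variables added to .env.local above (names are assumptions).
import { createAzure } from '@ai-sdk/azure';

export function getAzureModel(resourceName: string, apiKey: string, deploymentName: string) {
  const azure = createAzure({
    resourceName, // maps to https://<resourceName>.openai.azure.com
    apiKey,
  });

  // The returned model can be passed to streamText/generateText
  // in place of the default Anthropic model.
  return azure(deploymentName);
}
```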

Enabling Application Downloads in Local bolt.new

In this article, I will modify bolt.new to allow applications created in the tool to be downloaded locally. This feature will facilitate internal deployment of bolt.new applications, making it particularly useful for corporate environments.

Objective
Add functionality to download the project files as a ZIP archive.

Steps to Implement
1. Integrate a download button in the interface: add a download button in the sidebar or toolbar.
2. Generate a ZIP archive of the project: use a library like JSZip to bundle project files into a ZIP archive.
3. Download the ZIP archive: trigger the browser's download functionality with the generated ZIP file.
4. Test the feature: ensure that the downloaded ZIP contains the expected files and directory structure.

In the next article, we will cover how to modify bolt.new to integrate with Azure OpenAI Service, streamlining the application for enterprise-level use cases. Pl
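As an illustration of steps 2 and 3 above, here is a minimal sketch of zipping a set of in-memory files and handing the result to the browser's download flow. It assumes the JSZip and file-saver libraries; the helper name and the way bolt.new exposes its file map are assumptions, not the article's actual code.

```ts
// Sketch: bundle project files into a ZIP and download it in the browser.
// Assumes JSZip and file-saver are installed; how the file map is obtained
// from bolt.new's workbench is not shown here.
import JSZip from 'jszip';
import { saveAs } from 'file-saver';

export async function downloadProjectAsZip(files: Record<string, string>, zipName = 'project.zip') {
  const zip = new JSZip();

  // Add each file under its relative path, preserving the directory structure.
  for (const [path, content] of Object.entries(files)) {
    zip.file(path, content);
  }

  // Generate the archive as a Blob and trigger the browser download.
  const blob = await zip.generateAsync({ type: 'blob' });
  saveAs(blob, zipName);
}
```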

Modify the local bolt.new interface to allow input of the API key

In bolt.new, the API key can be configured using environment variables, but this time we will modify it so the API key can be entered directly from the interface.

Modification Details
Sidebar
We will enable API key input directly from the sidebar. In the sidebar, which currently displays chat history, we add a new form at the top for entering the API key. To achieve this, modify the file bolt.new/app/components/sidebar/Menu.client.tsx.

First, import the component that handles API key input: import { ApiKeyInput } from '~/components/sidebar/ApiKeyInput'; The bolt.new/app/components/sidebar/ApiKeyInput.tsx file will be created later.

Next, add a form for entering the API key within the menu: ... return ( <motion.div ref={menuRef} initial="closed" animate={open ? 'open' : 'closed'} variants={menuVariants
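The ApiKeyInput.tsx file itself is created later in the article, beyond this excerpt. Purely as a sketch, a minimal version of such a component could look like the following; persisting the key to localStorage under 'anthropic_api_key' is an assumption, and the actual article may store and style it differently.

```tsx
// Sketch of a minimal API key input for the sidebar (client-side component).
// Storing the key in localStorage under 'anthropic_api_key' is an assumption;
// the real ApiKeyInput.tsx created later in the article may differ.
import { useState, type ChangeEvent } from 'react';

export function ApiKeyInput() {
  const [apiKey, setApiKey] = useState(
    () => localStorage.getItem('anthropic_api_key') ?? '',
  );

  const handleChange = (event: ChangeEvent<HTMLInputElement>) => {
    const value = event.target.value;
    setApiKey(value);
    // Persist the key so it survives reloads and can be read by the chat request code.
    localStorage.setItem('anthropic_api_key', value);
  };

  return (
    <div className="p-4">
      <label className="block text-sm mb-1">Anthropic API Key</label>
      <input
        type="password"
        value={apiKey}
        onChange={handleChange}
        placeholder="sk-ant-..."
        className="w-full rounded border px-2 py-1"
      />
    </div>
  );
}
```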

Running bolt.new Locally

What is bolt.new?
bolt.new is an open-source tool for creating web applications. While there is also a web version, this article focuses on how to run it locally.

Environment Setup
This article will guide you through the steps to run it using WSL (Windows Subsystem for Linux).

Installing pnpm
First, let's install pnpm, starting with Node.js.
sudo apt update
sudo apt install nodejs npm
sudo npm -g install n
sudo n stable
sudo apt purge nodejs npm
sudo apt autoremove
sudo apt update
sudo apt upgrade
After restarting the terminal, check whether Node.js and npm have been installed successfully.
node --version
npm --version

Downloading bolt.new
Next, refer to bolt.new/CONTRIBUTING.md to download the environment from GitHub.
git clone https://github.com/stackblitz/bolt.new.git
Install the libraries on WSL.
cd bolt.new
pnpm install
Create a .env.local file under the bolt.new folder and specify the Anthropic API key in it
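For reference, the .env.local described above might look like the snippet below. The variable name ANTHROPIC_API_KEY follows bolt.new's .env.example, and the key value is a placeholder; check the repository's .env.example for the exact names expected by your checkout.

```
# bolt.new/.env.local — placeholder value, replace with your own key
ANTHROPIC_API_KEY=sk-ant-xxxxxxxxxxxxxxxx
```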

Using Gemini with the OpenAI Library

Based on this article, we can now use Gemini with the OpenAI library, so I decided to give it a try. Currently, only the Chat Completion API and Embedding API are available. I tried using both Python and JavaScript.

Python
First, let's set up the environment.
pip install openai python-dotenv
Next, let's run the following code.
import os
from dotenv import load_dotenv
from openai import OpenAI
load_dotenv()
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")
client = OpenAI(api_key=GOOGLE_API_KEY, base_url="https://generativelanguage.googleapis.com/v1beta/")
response = client.chat.completions.create(model="gemini-1.5-flash", n=1, messages=[{"role": "system", "content": "You are a helpful assistant."}, {
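For the JavaScript side mentioned above, the same pattern can be followed with the openai npm package. This is a sketch, not the article's exact code: it reuses the base URL and model from the Python snippet, and the user prompt is a placeholder.

```ts
// Sketch: calling Gemini through the OpenAI-compatible endpoint with the openai npm package.
// Reuses the base URL and model from the Python example above; the prompt is a placeholder.
// Note: Google's documentation also shows .../v1beta/openai/ as the base URL.
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.GOOGLE_API_KEY,
  baseURL: 'https://generativelanguage.googleapis.com/v1beta/',
});

async function main() {
  const response = await client.chat.completions.create({
    model: 'gemini-1.5-flash',
    n: 1,
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: 'Hello, Gemini!' }, // placeholder prompt
    ],
  });

  console.log(response.choices[0].message.content);
}

main();
```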

I created a locally running AI bulletin board

I developed a simple AI bulletin board using WebSocket. With this setup, users can experience a virtual bulletin board through AI interactions. Here are the main features:

AI-Generated Responses: Using a local LLM (a 2B-parameter model), AI-generated responses are created based on different user personas. Since these personas are generated automatically, you can set the number of participants to increase the number of people in the bulletin board simulation.

User Post Moderation by AI: Sometimes users may post emotionally charged messages. By running messages through the AI, users can adjust their content to a more neutral tone before posting. This feature is optional.

The full code is available on GitHub. Below, I'll provide a brief code overview.

WebSocket
Since multiple AIs respond simultaneously, I used WebSocket for communication. The server and client are built with FastAPI. The client uses HTML
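As a sketch of the client side of this setup: the browser opens a WebSocket to the FastAPI server, sends the user's post, and renders every frame that comes back (user posts and AI persona replies alike). The URL, the /ws path, and the JSON message shape are assumptions for illustration; the actual implementation is in the linked GitHub repository.

```ts
// Sketch: browser-side WebSocket client for the bulletin board.
// The endpoint path (/ws) and the JSON message shape are assumptions;
// see the GitHub repository for the real implementation.
const socket = new WebSocket('ws://localhost:8000/ws');

socket.addEventListener('open', () => {
  // Send a user post; the server relays it and AI personas reply asynchronously.
  socket.send(JSON.stringify({ user: 'guest', message: 'Hello, board!' }));
});

socket.addEventListener('message', (event) => {
  // Each incoming frame is a post from either a user or an AI persona.
  const post = JSON.parse(event.data);
  console.log(`${post.user}: ${post.message}`);
});
```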