Posts

Enabling Application Downloads in Local bolt.new

In this article, I will modify bolt.new so that applications created in the tool can be downloaded locally. This feature makes it easier to deploy bolt.new applications internally, which is particularly useful in corporate environments.

Objective: add functionality to download the project files as a ZIP archive.

Steps to implement:
1. Integrate a download button in the interface: add a download button in the sidebar or toolbar.
2. Generate a ZIP archive of the project: use a library like JSZip to bundle the project files into a ZIP archive (a sketch follows this excerpt).
3. Download the ZIP archive: trigger the browser's download functionality with the generated ZIP file.
4. Test the feature: ensure that the downloaded ZIP contains the expected files and directory structure.

In the next article, we will cover how to modify bolt.new to integrate with Azure OpenAI Service, streamlining the application for enterprise-level use cases. …
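The excerpt does not include the implementation itself, so here is a minimal sketch of steps 2 and 3, assuming the project files are available as an in-memory map of path to content. The function name downloadProjectZip and the files shape are illustrative assumptions, not bolt.new's actual API:

    import JSZip from 'jszip';

    // Minimal sketch: bundle an in-memory map of project files into a ZIP archive
    // and hand it to the browser's download mechanism. Names are illustrative only.
    export async function downloadProjectZip(files: Record<string, string>, zipName = 'project.zip') {
      const zip = new JSZip();

      // Add each file at its relative path so the directory structure is preserved.
      for (const [path, content] of Object.entries(files)) {
        zip.file(path, content);
      }

      // Generate the archive as a Blob and trigger a download via a temporary anchor element.
      const blob = await zip.generateAsync({ type: 'blob' });
      const url = URL.createObjectURL(blob);
      const anchor = document.createElement('a');
      anchor.href = url;
      anchor.download = zipName;
      anchor.click();
      URL.revokeObjectURL(url);
    }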

Modify the local bolt.new interface to allow input of the API key

In bolt.new, the API key can be configured using environment variables, but this time we will modify the tool so that the API key can be entered directly from the interface.

Modification Details: Sidebar

We will enable API key input directly from the sidebar. In the sidebar, which currently displays the chat history, we add a new form at the top for entering the API key. To achieve this, modify the file bolt.new/app/components/sidebar/Menu.client.tsx.

First, import the component that handles API key input:

    import { ApiKeyInput } from '~/components/sidebar/ApiKeyInput';

The bolt.new/app/components/sidebar/ApiKeyInput.tsx file will be created later (a sketch of what it might look like follows this excerpt). Next, add a form for entering the API key within the menu:

    ...
    return (
      <motion.div
        ref={menuRef}
        initial="closed"
        animate={open ? 'open' : 'closed'}
        variants={menuVariants} …
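The excerpt cuts off before ApiKeyInput.tsx is shown. As a rough idea of what such a component could look like, here is a minimal sketch; the localStorage key 'apiKey' and the overall storage strategy are assumptions for illustration, not the article's actual implementation:

    import { useState } from 'react';

    // Hypothetical minimal component: keeps the entered key in localStorage so the
    // rest of the app can read it. Storage strategy and names are illustrative only.
    export function ApiKeyInput() {
      const [apiKey, setApiKey] = useState(() => localStorage.getItem('apiKey') ?? '');

      const save = () => {
        localStorage.setItem('apiKey', apiKey);
      };

      return (
        <div>
          <input
            type="password"
            value={apiKey}
            placeholder="Anthropic API key"
            onChange={(event) => setApiKey(event.target.value)}
          />
          <button onClick={save}>Save</button>
        </div>
      );
    }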

Running bolt.new Locally

What is bolt.new? bolt.new is an open-source tool for creating web applications. While there is also a web version, this article focuses on how to run it locally.

Environment Setup

This article will guide you through the steps to run it using WSL (Windows Subsystem for Linux).

Installing pnpm

First, install Node.js and npm, which are required before installing pnpm:

    sudo apt update
    sudo apt install nodejs npm
    sudo npm -g install n
    sudo n stable
    sudo apt purge nodejs npm
    sudo apt autoremove
    sudo apt update
    sudo apt upgrade

After restarting the terminal, check whether Node.js and npm were installed successfully:

    node --version
    npm --version

Downloading bolt.new

Next, refer to bolt.new/CONTRIBUTING.md to download the environment from GitHub:

    git clone https://github.com/stackblitz/bolt.new.git

Install the libraries on WSL:

    cd bolt.new
    pnpm install

Create a .env.local file under the bolt.new folder and specify the Anthropic API key in it. …
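The excerpt ends at the .env.local step. Based on bolt.new's CONTRIBUTING.md, the file typically contains a single line with the key (the value below is a placeholder):

    ANTHROPIC_API_KEY=XXXXXXXX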

Using Gemini with the OpenAI Library

Based on this article, we can now use Gemini with the OpenAI library, so I decided to give it a try. Currently, only the Chat Completions API and the Embeddings API are available. In this article, I tried using both Python and JavaScript.

Python

First, let's set up the environment:

    pip install openai python-dotenv

Next, let's run the following code:

    import os
    from dotenv import load_dotenv
    from openai import OpenAI

    load_dotenv()
    GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")

    client = OpenAI(
        api_key=GOOGLE_API_KEY,
        base_url="https://generativelanguage.googleapis.com/v1beta/"
    )

    response = client.chat.completions.create(
        model="gemini-1.5-flash",
        n=1,
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            { …
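The JavaScript part of the article is cut off in this excerpt. As a sketch of the same idea on the Node.js side, assuming the openai npm package and a GOOGLE_API_KEY environment variable (not necessarily the article's exact code):

    import OpenAI from 'openai';

    // Point the OpenAI client at Gemini's OpenAI-compatible endpoint.
    const client = new OpenAI({
      apiKey: process.env.GOOGLE_API_KEY,
      baseURL: 'https://generativelanguage.googleapis.com/v1beta/',
    });

    const response = await client.chat.completions.create({
      model: 'gemini-1.5-flash',
      messages: [
        { role: 'system', content: 'You are a helpful assistant.' },
        { role: 'user', content: 'Explain how AI works in one sentence.' },
      ],
    });

    console.log(response.choices[0].message.content);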

I created a locally running AI bulletin board

I developed a simple AI bulletin board using WebSocket. With this setup, users can experience a virtual bulletin board through AI interactions. Here are the main features:

AI-Generated Responses: Using a local LLM (2B), AI-generated responses are created based on different user personas. Since these personas are generated automatically, you can set the number of participants to increase the number of people in the bulletin board simulation.

User Post Moderation by AI: Sometimes users may post emotionally charged messages. By running messages through the AI, users can adjust their content to a more neutral tone before posting. This feature is optional.

The full code is available on GitHub. Below, I'll provide a brief code overview.

WebSocket

Since multiple AIs are responding simultaneously, I used WebSocket for communication. The server and client are built with FastAPI. The client uses HTML …
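The code overview is cut off here. To illustrate the kind of client-side wiring such a setup needs, here is a minimal browser-side sketch; the endpoint path /ws and the JSON message shape are assumptions, not the project's actual protocol:

    // Hypothetical browser-side WebSocket client for the bulletin board.
    // The endpoint path and message shape are illustrative assumptions.
    const socket = new WebSocket('ws://localhost:8000/ws');

    socket.addEventListener('message', (event) => {
      // Each incoming message is assumed to carry a persona name and its post.
      const post = JSON.parse(event.data) as { persona: string; text: string };
      console.log(`${post.persona}: ${post.text}`);
    });

    function sendPost(text: string) {
      // Send the user's post; the server may moderate it before broadcasting.
      socket.send(JSON.stringify({ text }));
    }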

I tried out Granite 3.0

Granite 3.0

Granite 3.0 is an open-source, lightweight family of generative language models designed for a range of enterprise-level tasks. It natively supports multilingual functionality, coding, reasoning, and tool usage, making it suitable for enterprise environments. I tested running this model to see what tasks it can handle.

Environment Setup

I set up the Granite 3.0 environment in Google Colab and installed the necessary libraries using the following commands:

    !pip install torch torchvision torchaudio
    !pip install accelerate
    !pip install -U transformers

Execution

I tested the performance of both the 2B and 8B models of Granite 3.0.

2B Model

I ran the 2B model. Here's the code sample for the 2B model:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    device = "auto"
    model_path = "ibm-granite/granite-3.0-2b-instruct"
    tokenizer = AutoTokenizer.from_pre…

Janus 1.3B: A Unified Model for Multimodal Understanding and Generation Tasks

Janus 1.3B

Janus is a new autoregressive framework that unifies multimodal understanding and generation. Unlike previous models, which used a single visual encoder for both understanding and generation tasks, Janus introduces two separate visual encoding pathways for these functions.

Differences in Encoding for Understanding and Generation

In multimodal understanding tasks, the visual encoder extracts high-level semantic information such as object categories and visual attributes. This encoder focuses on inferring complex meanings, emphasizing higher-dimensional semantic elements. In visual generation tasks, on the other hand, the emphasis is on generating fine details and maintaining overall consistency, so a lower-dimensional encoding that can capture spatial structures and textures is required.

Setting Up the Environment

Here are the steps to run Janus …