# I tried out Granite 3.0
Granite 3.0 is an open-source, lightweight family of generative language models designed for a range of enterprise-level tasks. It natively supports multilingual use, coding, reasoning, and tool usage, making it suitable for enterprise environments. I tested running this model to see what tasks it can handle.

## Environment Setup

I set up the Granite 3.0 environment in Google Colab and installed the necessary libraries using the following commands:

```shell
!pip install torch torchvision torchaudio
!pip install accelerate
!pip install -U transformers
```

## Execution

I tested the performance of both the 2B and 8B models of Granite 3.0.

### 2B Model

I ran the 2B model. Here's the code sample for the 2B model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "auto"
model_path = "ibm-granite/granite-3.0-2b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_path)
```
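The sample above stops at loading the tokenizer. A minimal sketch of the remaining steps, based on the standard transformers chat-template API rather than the original post (the prompt text, `max_new_tokens` value, and the `build_messages` helper are my own assumptions), might look like:

```python
def build_messages(user_prompt: str) -> list[dict]:
    # Wrap a single user turn in the message format expected by
    # tokenizer.apply_chat_template.
    return [{"role": "user", "content": user_prompt}]

def generate_response(prompt: str,
                      model_path: str = "ibm-granite/granite-3.0-2b-instruct") -> str:
    # Heavy imports are kept inside the function so the helper above
    # can be used without torch/transformers installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_path)
    model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")
    model.eval()

    # Render the chat messages into the model's prompt format and tokenize.
    chat = tokenizer.apply_chat_template(
        build_messages(prompt), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(chat, return_tensors="pt").to(model.device)

    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=200)

    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )

if __name__ == "__main__":
    print(generate_response("List three enterprise uses of a small LLM."))
```

Note that the first call downloads the model weights, so this is best run in an environment like Colab with a GPU and sufficient disk space.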