Friday, 14 February 2025

AI Agent Basics - Hands on

πŸš€ Pydantic AI Agents: Making AI Work for You (Without the Sci-Fi Apocalypse) πŸ€–

Welcome, future AI overlords (or at least enthusiastic learners)! Today, we're diving into AI agents built with Agno, Pydantic, and Ollama: a magical blend of Python, automation, and artificial intelligence that makes your life easier (and way cooler). Whether you need an assistant, a web search ninja, or a chatbot buddy, these AI agents have your back. Let's explore three awesome use cases!


1️⃣ The Basic Agent – Your AI Sidekick πŸ¦Έβ€β™‚️

Ever wished for a personal assistant who never complains, never asks for a raise, and always has an answer? Meet our Basic AI Agent, powered by Llama 3.2!

πŸ“œ Code:

from agno.agent import Agent
from agno.models.ollama import Ollama

agent = Agent(
    model=Ollama(id="llama3.2"),
    markdown=True
)

# Ask the AI about government stuff (or anything else!)
agent.print_response("What is the Ministry of Corporate Affairs in India? What does it do?")

πŸ’‘ What Does It Do?

  • Reads your mind (not really, but it does understand your questions).
  • Processes your input using Llama 3.2 (a powerful AI model).
  • Prints intelligent responses without an attitude.

Response:


πŸ‘‰ Real-Life Use Case: Use this agent to automate research for your projects, emails, or just settle arguments in group chats.


2️⃣ The Web Search Agent – Your AI Detective πŸ”

Tired of Googling everything and getting lost in clickbait articles? Enter the Web Search Agent, which fetches the latest news, trends, and research papers faster than your conspiracy-theory-loving uncle.

πŸ“œ Code:

# Install the tools first: pip install agno duckduckgo-search arxiv

from agno.agent import Agent
from agno.models.ollama import Ollama
from agno.tools.duckduckgo import DuckDuckGoTools
from agno.tools.arxiv import ArxivTools

agent = Agent(
    model=Ollama(id="llama3.2"),
    tools=[DuckDuckGoTools(), ArxivTools()],  # Internet searching powers activated!
    show_tool_calls=True,
    markdown=True
)

# Let's dig into Reinforcement Learning research!
agent.print_response("Search arXiv for 'Reinforcement Learning'")

πŸ’‘ What Does It Do?

  • Ducks into DuckDuckGo (for the latest web news).
  • Raids ArXiv (for cutting-edge research papers).
  • Finds answers instantly, without opening 100+ browser tabs.

Response:

πŸ‘‰ Real-Life Use Case: Perfect for students, researchers, and news junkies who want real-time updates on tech, finance, or cat videos (we won’t judge).
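Under the hood, the LLM itself decides which tool to call for a given query, but the routing idea can be illustrated with a tiny, purely hypothetical Python sketch (`pick_tool` and its keyword rules are made up for illustration; they are not part of Agno):

```python
# Hypothetical sketch of tool routing: in the real agent the LLM picks the
# tool, but conceptually it boils down to matching the query's intent.

def pick_tool(query: str) -> str:
    """Return which tool a research-style query should go to."""
    q = query.lower()
    if any(word in q for word in ("arxiv", "paper", "research")):
        return "arxiv"       # academic queries -> ArxivTools
    return "duckduckgo"      # everything else -> web search

print(pick_tool("Search arXiv for 'Reinforcement Learning'"))  # arxiv
print(pick_tool("Latest news on AI"))  # duckduckgo
```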


3️⃣ The Chat Agent – Your AI BFF (That Never Ignores You) πŸ’¬

Why text real people when you can chat with an AI that actually listens? The Chat Agent is a conversational AI that responds in real-time through a slick Streamlit UI.

πŸ“œ Code (chatAgent.py):

# Install dependencies: pip install pydantic streamlit ollama

import streamlit as st
from pydantic import BaseModel
import ollama

# 🎭 Meet your new AI buddy!
class AIAgent(BaseModel):
    name: str = "OllamaBot"
    version: str = "1.0"
    description: str = "A chatbot powered by Ollama LLM."

agent = AIAgent()

# πŸ› οΈ Streamlit UI
st.title("πŸ€– iMMSS LLM for Legal Assistance")
st.write("Ask me anything! (Type 'exit' to stop)")

# 🎀 Keep chat history alive!
if "messages" not in st.session_state:
    st.session_state.messages = []

# πŸš€ Show chat history
for msg in st.session_state.messages:
    st.write(msg)

# 🎀 Accept user input
user_query = st.text_input("You:", "")

# 🧠 AI Response Function
def get_ai_response(question: str):
    response = ollama.chat(model="llama3.2", messages=[{"role": "user", "content": question}])
    return response["message"]["content"]

# πŸš€ Processing user input
if user_query:
    if user_query.lower() == "exit":
        st.write("πŸ‘‹ Chatbot: Goodbye! Shutting down...")
        st.stop()

    # Generate response
    ai_answer = get_ai_response(user_query)

    # Save chat history
    st.session_state.messages.append(f"**You:** {user_query}")
    st.session_state.messages.append(f"**{agent.name}:** {ai_answer}")

    # Display AI response
    st.write(f"**{agent.name}:** {ai_answer}")

πŸ’‘ What Does It Do?

  • Listens to you like a good friend (no ghosting).
  • Answers your questions instantly using Llama 3.2.
  • Keeps the conversation going (until you type "exit").

Response for streamlit run chatAgent.py:


πŸ‘‰ Real-Life Use Case: Use it for legal help, customer support, or just for fun chats when your friends are too busy watching Netflix.
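The chat-history trick above (append each turn to st.session_state.messages, then re-render the whole list on every rerun) works independently of Streamlit. Here is a minimal sketch with a plain dict standing in for st.session_state (`append_turn` is a hypothetical helper, not part of the app above):

```python
# Plain-Python sketch of the session-state pattern: a dict stands in for
# st.session_state, and each exchange appends two formatted history lines.

def append_turn(state: dict, user_text: str, bot_name: str, bot_text: str) -> None:
    """Record one user/bot exchange, creating the history list on first use."""
    messages = state.setdefault("messages", [])
    messages.append(f"**You:** {user_text}")
    messages.append(f"**{bot_name}:** {bot_text}")

state = {}
append_turn(state, "Hello!", "OllamaBot", "Hi there! How can I help?")
append_turn(state, "Tell me a joke.", "OllamaBot", "Why did the AI cross the road?")
print(len(state["messages"]))  # 4: two turns, two history lines each
```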


πŸš€ Wrapping Up: Why Pydantic AI Agents?

These AI agents make your life easier, more fun, and way more productive by automating tasks, searching the web, and chatting in real time.

πŸ€– What Can You Build Next?

  • AI-powered customer support chatbots.
  • Real-time finance & stock market trackers.
  • Automated legal advisors (because lawyers are expensive).
  • A meme generator (because why not?).

πŸš€ Ready to start? Run the code, break things, and let AI do the boring work while you relax! πŸ˜Ž


Building an AI Agent Using Agno with Llama 3.2 (via Ollama)

Introduction

AI agents are intelligent systems that can automate tasks, generate text, and process information. In this tutorial, we'll explore how to create an AI agent using Agno with the Llama 3.2 model (served locally by Ollama) to generate text-based responses.

We'll cover:

  • Installing necessary dependencies
  • Setting up an AI agent with Agno & Ollama
  • Running the agent to generate a joke
  • Expanding its capabilities

1. Installing Dependencies

First, install the required libraries:

pip install agno ollama

Ensure that Ollama is installed and running on your system. If not, download it from Ollama's official website and start the service:

ollama serve

2. Creating a Basic AI Agent

Now, let's create an AI agent that generates a joke.

2.1 Writing the Basic Code

# Import required modules
from agno.agent import Agent
from agno.models.ollama import Ollama

# Initialize an AI agent with Ollama
agent = Agent(
    model=Ollama(id="llama3.2"),  # Uses the Llama 3.2 model served by Ollama
    markdown=True  # Enables Markdown formatting for better response rendering
)

# Generate and print a joke
agent.print_response("Share a 2-sentence joke.")

2.2 Running the Code

Save the file as ai_agent.py and run:

python ai_agent.py

You'll see a joke generated in your terminal! πŸŽ­πŸ˜‚


3. Expanding the AI Agent

We can extend this agent by:

  1. Accepting user input
  2. Providing responses in an interactive loop
  3. Enhancing it with different Ollama models like Mistral, Gemma, or CodeLlama

3.1 Interactive Chatbot

Let's upgrade our agent to chat with the user.

# Interactive AI Chat Agent
from agno.agent import Agent
from agno.models.ollama import Ollama

# Create an interactive chatbot
agent = Agent(
    model=Ollama(id="mistral"),  # You can switch to "llama3.2" or any available model
    markdown=True
)

print("Welcome to the AI Chatbot! Type 'exit' to stop.")
while True:
    user_input = input("You: ")  
    if user_input.lower() == "exit":
        print("Chatbot: Goodbye!")
        break
    response = agent.run(user_input)
    print("Chatbot:", response.content)

How to Run:

python ai_agent.py

πŸ’¬ Example Conversation:

You: Tell me a joke.
Chatbot: Why did the scarecrow win an award? Because he was outstanding in his field!
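One easy robustness tweak for the loop above: normalise the input before checking for the exit command, so stray spaces or capital letters still quit cleanly. A small hypothetical helper (`is_exit` is illustrative, not part of Agno):

```python
# Hypothetical helper: treat "exit"/"quit" (any case, any padding) as quit.

def is_exit(text: str) -> bool:
    return text.strip().lower() in {"exit", "quit"}

print(is_exit("  EXIT "))         # True
print(is_exit("Tell me a joke"))  # False
```

In the loop, the check `if user_input.lower() == "exit":` would become `if is_exit(user_input):`.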

4. Use Cases of Agno AI Agents

πŸ”₯ Fun Applications:

  • AI joke generator 🎭
  • Storytelling bot πŸ“–
  • Poetry creator ✍️

⚑ Productivity Applications:

  • AI research assistant πŸ“š
  • AI-powered coding assistant πŸ’»
  • AI Q&A chatbot πŸ€–

5. Summary

πŸš€ What We Learned:

  • βœ… Installed and configured Agno with Llama 3.2 (via Ollama)
  • βœ… Created a basic AI agent to generate jokes
  • βœ… Built an interactive chatbot
  • βœ… Explored use cases for AI agents

Now you can modify this agent to search the web, summarize text, or generate creative content! πŸ”₯



Dockerize the Above Chat Agent:

  1. Install Ollama in the container.
  2. Download the Llama 3.2 model inside the container.
  3. Expose Ollama for use by your chatbot.

Create a Dockerfile with the following contents:

πŸ“Œ Updated Dockerfile with Llama 3.2

# Use an official Python image
FROM python:3.11

# Set the working directory
WORKDIR /app

# Install system dependencies (including Ollama)
RUN apt update && apt install -y curl && \
    curl -fsSL https://ollama.com/install.sh | sh

# (The install script places the ollama binary on the default PATH,
# typically /usr/local/bin, so no extra PATH setup is needed)

# Copy the requirements file and install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Pull the Llama 3.2 model at build time (the Ollama server must be
# running in the background for `ollama pull` to succeed)
RUN ollama serve & sleep 5 && ollama pull llama3.2

# Copy the application code into the container
COPY . .

# Start the Ollama service, give it a moment to come up, then run the chatbot
CMD ollama serve & sleep 5 && python chatbot.py

πŸ“Œ Update requirements.txt

agno
ollama

πŸš€ Steps to Build & Run

1️⃣ Build the Docker Image

docker build -t ai-chatbot .

2️⃣ Run the Container

docker run -it --rm ai-chatbot

πŸ’‘ What This Dockerfile Does

βœ… Installs Ollama (for running Llama 3.2)
βœ… Pulls the Llama 3.2 model into the container
βœ… Starts Ollama and then runs the chatbot

Now your chatbot will run inside Docker with Llama 3.2! πŸš€
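Pulling the model at build time bakes several gigabytes into the image. An alternative sketch, assuming Docker Compose is available, runs Ollama as its own service with a persistent volume (the service names and OLLAMA_HOST wiring below are illustrative, not taken from the post):

```yaml
# docker-compose.yml (sketch): Ollama runs as a separate service, so the
# model is pulled once at runtime and cached in a named volume.
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama-data:/root/.ollama
  chatbot:
    build: .
    depends_on:
      - ollama
    environment:
      - OLLAMA_HOST=http://ollama:11434
volumes:
  ollama-data:
```

With this layout you pull the model once with `docker compose exec ollama ollama pull llama3.2`, and the chatbot reaches the server through the OLLAMA_HOST environment variable instead of a bundled copy.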
