🚀 Pydantic AI Agents: Making AI Work for You (Without the Sci-Fi Apocalypse) 🤖
Welcome, future AI overlords (or at least enthusiastic learners)! Today, we're diving into Pydantic AI Agents, a magical blend of Python, automation, and artificial intelligence that makes your life easier (and way cooler). Whether you need an assistant, a web search ninja, or a chatbot buddy, these AI agents have your back. Let's explore three awesome use cases!
1️⃣ The Basic Agent – Your AI Sidekick 🦸‍♂️
Ever wished for a personal assistant who never complains, never asks for a raise, and always has an answer? Meet our Basic AI Agent, powered by Llama 3.2!
🔍 Code:

from agno.agent import Agent
from agno.models.ollama import Ollama

agent = Agent(
    model=Ollama(id="llama3.2"),
    markdown=True
)

# Ask the AI about government stuff (or anything else!)
agent.print_response("What is the Ministry of Corporate Affairs in India? What does it do?")
💡 What Does It Do?
- Reads your mind (not really, but it does understand your questions).
- Processes your input using Llama 3.2 (a powerful AI model).
- Prints intelligent responses without an attitude.
🌟 Real-Life Use Case: Use this agent to automate research for your projects, emails, or just settle arguments in group chats.
2️⃣ The Web Search Agent – Your AI Detective 🔎
Tired of Googling everything and getting lost in clickbait articles? Enter the Web Search Agent, which fetches the latest news, trends, and research papers faster than your conspiracy-theory-loving uncle.
🔍 Code:

# Install the tools first: pip install agno duckduckgo-search arxiv
from agno.agent import Agent
from agno.models.ollama import Ollama
from agno.tools.duckduckgo import DuckDuckGoTools
from agno.tools.arxiv import ArxivTools

agent = Agent(
    model=Ollama(id="llama3.2"),
    tools=[DuckDuckGoTools(), ArxivTools()],  # Internet searching powers activated!
    show_tool_calls=True,
    markdown=True
)

# Let's dig into Reinforcement Learning research!
agent.print_response("Search arXiv for 'Reinforcement Learning'")
💡 What Does It Do?
- Ducks into DuckDuckGo (for the latest web news).
- Raids ArXiv (for cutting-edge research papers).
- Finds answers instantly, without opening 100+ browser tabs.
🌟 Real-Life Use Case: Perfect for students, researchers, and news junkies who want real-time updates on tech, finance, or cat videos (we won't judge).
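Tool use sounds magical, but conceptually it is just a dispatch loop: the model names a tool plus its arguments, the agent runs the matching Python function, and the result is fed back to the model. Here is a stripped-down illustration of that idea (not Agno's actual internals; the tool functions are stand-ins):

```python
# Minimal tool-dispatch sketch (illustrative only, NOT Agno's internals).
# Each tool is a plain function registered under the name the model calls.

def web_search(query: str) -> str:
    return f"[web results for: {query}]"   # stand-in for DuckDuckGoTools

def arxiv_search(query: str) -> str:
    return f"[arXiv papers on: {query}]"   # stand-in for ArxivTools

TOOLS = {"web_search": web_search, "arxiv_search": arxiv_search}

def dispatch(tool_call: dict) -> str:
    """Execute a model-proposed call like
    {'name': 'arxiv_search', 'args': {'query': 'Reinforcement Learning'}}."""
    tool = TOOLS[tool_call["name"]]
    return tool(**tool_call["args"])
```

This is also why show_tool_calls=True is useful: it surfaces the calls the model proposes before you trust the final answer.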
3️⃣ The Chat Agent – Your AI BFF (That Never Ignores You) 💬
Why text real people when you can chat with an AI that actually listens? The Chat Agent is a conversational AI that responds in real-time through a slick Streamlit UI.
🔍 Code (chatAgent.py):
# Install dependencies: pip install pydantic streamlit ollama
import streamlit as st
from pydantic import BaseModel
import ollama

# 🤖 Meet your new AI buddy!
class AIAgent(BaseModel):
    name: str = "OllamaBot"
    version: str = "1.0"
    description: str = "A chatbot powered by Ollama LLM."

agent = AIAgent()

# 🛠️ Streamlit UI
st.title("🤖 iMMSS LLM for Legal Assistance")
st.write("Ask me anything! (Type 'exit' to stop)")

# 💾 Keep chat history alive!
if "messages" not in st.session_state:
    st.session_state.messages = []

# 📜 Show chat history
for msg in st.session_state.messages:
    st.write(msg)

# ⌨️ Accept user input
user_query = st.text_input("You:", "")

# 🧠 AI Response Function
def get_ai_response(question: str):
    response = ollama.chat(model="llama3.2", messages=[{"role": "user", "content": question}])
    return response["message"]["content"]

# 🚀 Processing user input
if user_query:
    if user_query.lower() == "exit":
        st.write("👋 Chatbot: Goodbye! Shutting down...")
        st.stop()

    # Generate response
    ai_answer = get_ai_response(user_query)

    # Save chat history
    st.session_state.messages.append(f"**You:** {user_query}")
    st.session_state.messages.append(f"**{agent.name}:** {ai_answer}")

    # Display AI response
    st.write(f"**{agent.name}:** {ai_answer}")
💡 What Does It Do?
- Listens to you like a good friend (no ghosting).
- Answers your questions instantly using Llama 3.2.
- Keeps the conversation going (until you type "exit").
🌟 Real-Life Use Case: Use it for legal help, customer support, or just for fun chats when your friends are too busy watching Netflix.
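One caveat: get_ai_response() sends only the latest question to the model, so the bot forgets earlier turns. ollama.chat accepts the full message list, so real multi-turn memory just means resending the history in role/content form. A sketch of that idea (the helper names are hypothetical):

```python
# Sketch: keep history in the role/content format that ollama.chat expects,
# so every call sees the whole conversation. Helper names are illustrative.

def build_messages(history: list[dict], question: str) -> list[dict]:
    """Messages to send for the next model call: past turns + new question."""
    return history + [{"role": "user", "content": question}]

def remember(history: list[dict], question: str, answer: str) -> list[dict]:
    """Record both sides of the exchange for future turns."""
    return history + [
        {"role": "user", "content": question},
        {"role": "assistant", "content": answer},
    ]

# In the Streamlit app this would look like:
#   msgs = build_messages(st.session_state.history, user_query)
#   reply = ollama.chat(model="llama3.2", messages=msgs)["message"]["content"]
#   st.session_state.history = remember(st.session_state.history, user_query, reply)
```

Keeping the raw role/content history separate from the pretty-printed strings also makes it easy to render the transcript any way you like.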
🎯 Wrapping Up: Why Pydantic AI Agents?
These AI agents make your life easier, more fun, and way more productive by automating tasks, searching the web, and chatting in real time.
🤖 What You Can Build Next
- AI-powered customer support chatbots.
- Real-time finance & stock market trackers.
- Automated legal advisors (because lawyers are expensive).
- A meme generator (because why not?).
🚀 Ready to start? Run the code, break things, and let AI do the boring work while you relax! 🎉
Building an AI Agent Using Agno with Llama 3.2 (via Ollama)
Introduction
AI agents are intelligent systems that can automate tasks, generate text, and process information. In this tutorial, we'll explore how to create an AI agent using Agno with the Llama 3.2 model, served locally by Ollama, to generate text-based responses.
We'll cover:
- Installing necessary dependencies
- Setting up an AI agent with Agno & Ollama
- Running the agent to generate a joke
- Expanding its capabilities
1. Installing Dependencies
First, install the required libraries:
pip install agno ollama
Ensure that Ollama is installed and running on your system. If not, download it from Ollama's official website and start the service:
ollama serve
2. Creating a Basic AI Agent
Now, let's create an AI agent that generates a joke.
2.1 Writing the Basic Code
# Import required modules
from agno.agent import Agent
from agno.models.ollama import Ollama

# Initialize an AI agent with Ollama
agent = Agent(
    model=Ollama(id="llama3.2"),  # Uses the Llama 3.2 model served by Ollama
    markdown=True  # Enables Markdown formatting for better response rendering
)

# Generate and print a joke
agent.print_response("Share a 2-sentence joke.")
2.2 Running the Code
Save the file as ai_agent.py and run:
python ai_agent.py
You'll see a joke generated in your terminal! 😂🎉
3. Expanding the AI Agent
We can extend this agent by:
- Accepting user input
- Providing responses in an interactive loop
- Enhancing it with different Ollama models like Mistral, Gemma, or CodeLlama
3.1 Interactive Chatbot
Let's upgrade our agent to chat with the user.
# Interactive AI Chat Agent
from agno.agent import Agent
from agno.models.ollama import Ollama

# Create an interactive chatbot
agent = Agent(
    model=Ollama(id="mistral"),  # You can switch to "llama3.2" or any available model
    markdown=True
)

print("Welcome to the AI Chatbot! Type 'exit' to stop.")

while True:
    user_input = input("You: ")
    if user_input.lower() == "exit":
        print("Chatbot: Goodbye!")
        break
    response = agent.run(user_input)
    print("Chatbot:", response.content)
How to Run:
python ai_agent.py
💬 Example Conversation:
You: Tell me a joke.
Chatbot: Why did the scarecrow win an award? Because he was outstanding in his field!
4. Use Cases of Agno AI Agents
🔥 Fun Applications:
- AI joke generator 😂
- Storytelling bot 📖
- Poetry creator ✍️
⚡ Productivity Applications:
- AI research assistant 🔍
- AI-powered coding assistant 💻
- AI Q&A chatbot 🤖
5. Summary
📌 What We Learned:
✅ Installed and configured Agno with Llama 3.2 (via Ollama)
✅ Created a basic AI agent to generate jokes
✅ Built an interactive chatbot
✅ Explored use cases for AI agents
Now you can modify this agent to search the web, summarize text, or generate creative content! 🔥
Dockerize the Chat Agent
To run the chatbot inside Docker, we need to:
- Install Ollama in the container.
- Download the Llama 3.2 model inside the container.
- Expose Ollama for use by the chatbot.
Make a Dockerfile with the following contents:
📝 Dockerfile with Llama 3.2
# Use an official Python image
FROM python:3.11

# Set the working directory
WORKDIR /app

# Install system dependencies, then Ollama (the install script places
# the ollama binary on the PATH, e.g. in /usr/local/bin)
RUN apt update && apt install -y curl && \
    curl -fsSL https://ollama.com/install.sh | sh

# Copy the requirements file and install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Pull the Llama 3.2 model into the image (the Ollama server must be
# running for "ollama pull" to work, so start it briefly in the background)
RUN ollama serve & sleep 5 && ollama pull llama3.2

# Copy the application code into the container
COPY . .

# Start the Ollama service, give it a moment to come up, then run the chatbot
CMD ollama serve & sleep 5 && python ai_agent.py
📝 Update requirements.txt
agno
ollama
🚀 Steps to Build & Run
1️⃣ Build the Docker Image
docker build -t ai-chatbot .
2️⃣ Run the Container
docker run -it --rm ai-chatbot
💡 What This Dockerfile Does
✅ Installs Ollama (for running Llama 3.2)
✅ Pulls the Llama 3.2 model into the image
✅ Starts Ollama and then runs the chatbot
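One practical tweak if image size or rebuild time becomes a problem: the Llama 3.2 weights are several gigabytes, and baking them in with RUN ollama pull bloats every rebuild. An alternative is to drop that line and cache models in a named Docker volume instead. This is a sketch; the volume name ollama_models is arbitrary, and Ollama's default model directory is under /root/.ollama:

```shell
# Persist pulled models across container runs in a named volume
docker run -it --rm -v ollama_models:/root/.ollama ai-chatbot
```

On the first run the model is downloaded into the volume; later runs reuse it without re-downloading.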
Now your chatbot will run inside Docker with Llama 3.2! 🎉