Develop a Multi-Tool AI Agent with Secure Python Execution Using Riza and Gemini


In this tutorial, we'll harness Riza's secure Python execution as the cornerstone of a robust, tool-augmented AI agent in Google Colab. Starting with seamless API key management, via Colab secrets, environment variables, or hidden prompts, we'll configure your Riza credentials to enable sandboxed, audit-ready code execution. We'll integrate Riza's ExecPython tool into a LangChain agent alongside Google's Gemini generative model, define an AdvancedCallbackHandler that captures both tool invocations and Riza execution logs, and build custom utilities for advanced math and in-depth text analysis.

%pip install --upgrade --quiet langchain-community langchain-google-genai rizaio python-dotenv


import os
from typing import Dict, Any, List
from datetime import datetime
import json
import getpass
from google.colab import userdata

We install and upgrade the core libraries, the LangChain Community extensions, the Google Gemini integration, Riza's secure execution package, and dotenv support, quietly in Colab. We then import standard utilities (e.g., os, datetime, json), typing annotations, secure input via getpass, and Colab's user data API to manage environment variables and user secrets seamlessly.

def setup_api_keys():
    """Set up API keys using multiple secure methods."""

    try:
        os.environ['GOOGLE_API_KEY'] = userdata.get('GOOGLE_API_KEY')
        os.environ['RIZA_API_KEY'] = userdata.get('RIZA_API_KEY')
        print("✅ API keys loaded from Colab secrets")
        return True
    except Exception:
        pass

    if os.getenv('GOOGLE_API_KEY') and os.getenv('RIZA_API_KEY'):
        print("✅ API keys found in environment")
        return True

    try:
        if not os.getenv('GOOGLE_API_KEY'):
            google_key = getpass.getpass("🔑 Enter your Google Gemini API key: ")
            os.environ['GOOGLE_API_KEY'] = google_key

        if not os.getenv('RIZA_API_KEY'):
            riza_key = getpass.getpass("🔑 Enter your Riza API key: ")
            os.environ['RIZA_API_KEY'] = riza_key

        print("✅ API keys set securely via input")
        return True
    except Exception:
        print("❌ Failed to set API keys")
        return False


if not setup_api_keys():
    print("⚠️  Please set up your API keys using one of these methods:")
    print("   1. Colab Secrets: Go to 🔑 in the left panel, add GOOGLE_API_KEY and RIZA_API_KEY")
    print("   2. Environment: Set GOOGLE_API_KEY and RIZA_API_KEY before running")
    print("   3. Manual input: Run the cell and enter keys when prompted")
    exit()

The above cell defines a setup_api_keys() function that securely retrieves your Google Gemini and Riza API keys by first attempting to load them from Colab secrets, then falling back to existing environment variables, and finally prompting you to enter them via hidden input if needed. If none of these methods succeed, it prints instructions on how to provide your keys and exits the notebook.
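The environment-then-prompt fallback chain above can be exercised in isolation. Below is a minimal sketch of the same pattern; resolve_key, DEMO_KEY, and the lambda prompt stub are illustrative names, not part of the notebook's API:

```python
import os

def resolve_key(name: str, prompt_fn) -> str:
    """Return a credential: prefer the environment, else prompt for it.
    Mirrors the fallback order used by setup_api_keys() above."""
    value = os.getenv(name)
    if value:
        return value
    value = prompt_fn(f"Enter {name}: ")  # getpass.getpass in the real notebook
    os.environ[name] = value              # cache so later cells can reuse it
    return value

# Simulated usage: the "prompt" is a stub standing in for hidden input
os.environ.pop("DEMO_KEY", None)
key = resolve_key("DEMO_KEY", lambda _: "secret-123")
print(key)                      # falls through to the prompt stub
print(os.getenv("DEMO_KEY"))    # now cached in the environment
```

Caching the prompted value back into os.environ is what lets later cells (and child processes) pick up the key without re-prompting.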

from langchain_community.tools.riza.command import ExecPython
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.messages import HumanMessage, AIMessage
from langchain.memory import ConversationBufferWindowMemory
from langchain.tools import Tool
from langchain.callbacks.base import BaseCallbackHandler

We import Riza's ExecPython tool alongside LangChain's core components for building a tool-calling agent, namely the Gemini LLM wrapper (ChatGoogleGenerativeAI), the agent executor and creation functions (AgentExecutor, create_tool_calling_agent), the prompt and message templates, the conversation memory buffer, the generic Tool wrapper, and the base callback handler for logging and monitoring agent actions. These building blocks let you assemble, configure, and observe a memory-enabled, multi-tool AI agent in Colab.

class AdvancedCallbackHandler(BaseCallbackHandler):
    """Enhanced callback handler for detailed logging and metrics."""

    def __init__(self):
        self.execution_log = []
        self.start_time = None
        self.token_count = 0

    def on_agent_action(self, action, **kwargs):
        timestamp = datetime.now().strftime("%H:%M:%S")
        self.execution_log.append({
            "timestamp": timestamp,
            "action": action.tool,
            "input": str(action.tool_input)[:100] + "..." if len(str(action.tool_input)) > 100 else str(action.tool_input)
        })
        print(f"🔧 [{timestamp}] Using tool: {action.tool}")

    def on_agent_finish(self, finish, **kwargs):
        timestamp = datetime.now().strftime("%H:%M:%S")
        print(f"✅ [{timestamp}] Agent completed successfully")

    def get_execution_summary(self):
        return {
            "total_actions": len(self.execution_log),
            "execution_log": self.execution_log
        }


class MathTool:
    """Advanced mathematical operations tool."""

    @staticmethod
    def complex_calculation(expression: str) -> str:
        """Evaluate complex mathematical expressions safely."""
        try:
            import math
            import numpy as np

            safe_dict = {
                "__builtins__": {},
                "abs": abs, "round": round, "min": min, "max": max,
                "sum": sum, "len": len, "pow": pow,
                "math": math, "np": np,
                "sin": math.sin, "cos": math.cos, "tan": math.tan,
                "log": math.log, "sqrt": math.sqrt, "pi": math.pi, "e": math.e
            }

            result = eval(expression, safe_dict)
            return f"Result: {result}"
        except Exception as e:
            return f"Math Error: {str(e)}"


class TextAnalyzer:
    """Advanced text analysis tool."""

    @staticmethod
    def analyze_text(text: str) -> str:
        """Perform comprehensive text analysis."""
        try:
            char_freq = {}
            for char in text.lower():
                if char.isalpha():
                    char_freq[char] = char_freq.get(char, 0) + 1

            words = text.split()
            word_count = len(words)
            avg_word_length = sum(len(word) for word in words) / max(word_count, 1)

            specific_chars = {}
            for char in set(text.lower()):
                if char.isalpha():
                    specific_chars[char] = text.lower().count(char)

            analysis = {
                "total_characters": len(text),
                "total_words": word_count,
                "average_word_length": round(avg_word_length, 2),
                "character_frequencies": dict(sorted(char_freq.items(), key=lambda x: x[1], reverse=True)[:10]),
                "specific_character_counts": specific_chars
            }

            return json.dumps(analysis, indent=2)
        except Exception as e:
            return f"Analysis Error: {str(e)}"

The above cell brings together three essential pieces: an AdvancedCallbackHandler that captures every tool invocation with a timestamped log and can summarize the total actions taken; a MathTool class that safely evaluates complex mathematical expressions in a restricted environment to prevent unwanted operations; and a TextAnalyzer class that computes detailed text statistics, such as character frequencies, word counts, and average word length, and returns the results as formatted JSON.
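To see why the restricted eval in MathTool matters, compare whitelisted expressions against attempts to reach the builtins. A trimmed-down standalone sketch of the same whitelist idea (safe_eval and SAFE are illustrative names, not part of the tutorial's code):

```python
import math

# Empty __builtins__ plus an explicit whitelist of allowed names
SAFE = {"__builtins__": {}, "sqrt": math.sqrt, "pi": math.pi}

def safe_eval(expr: str):
    """Evaluate expr with only whitelisted names; report anything blocked."""
    try:
        return eval(expr, SAFE)
    except Exception as e:
        return f"blocked: {type(e).__name__}"

print(safe_eval("sqrt(16) + pi"))       # → 7.141592653589793 (whitelisted names)
print(safe_eval("__import__('os')"))    # blocked: __import__ is not defined
print(safe_eval("open('/etc/passwd')")) # blocked for the same reason
```

Note that an eval whitelist like this blocks casual misuse but is not a true sandbox (e.g., attribute-walking tricks can escape it), which is exactly why the heavy lifting in this tutorial is delegated to Riza's sandboxed ExecPython.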

def validate_api_keys():
    """Validate API keys before creating agents."""
    try:
        test_llm = ChatGoogleGenerativeAI(
            model="gemini-1.5-flash",
            temperature=0
        )
        test_llm.invoke("test")
        print("✅ Gemini API key validated")

        test_tool = ExecPython()
        print("✅ Riza API key validated")

        return True
    except Exception as e:
        print(f"❌ API key validation failed: {str(e)}")
        print("Please check your API keys and try again")
        return False


if not validate_api_keys():
    exit()


python_tool = ExecPython()
math_tool = Tool(
    name="advanced_math",
    description="Perform complex mathematical calculations and evaluations",
    func=MathTool.complex_calculation
)
text_analyzer_tool = Tool(
    name="text_analyzer",
    description="Analyze text for character frequencies, word statistics, and specific character counts",
    func=TextAnalyzer.analyze_text
)


tools = [python_tool, math_tool, text_analyzer_tool]


try:
    llm = ChatGoogleGenerativeAI(
        model="gemini-1.5-flash",
        temperature=0.1,
        max_tokens=2048,
        top_p=0.8,
        top_k=40
    )
    print("✅ Gemini model initialized successfully")
except Exception as e:
    print(f"⚠️  Primary Gemini config failed, falling back to a simpler Flash config: {e}")
    llm = ChatGoogleGenerativeAI(
        model="gemini-1.5-flash",
        temperature=0.1,
        max_tokens=2048
    )

In this cell, we first define and run validate_api_keys() to ensure that both the Gemini and Riza credentials work, attempting a dummy LLM call and instantiating the Riza ExecPython tool, and exiting the notebook if validation fails. We then instantiate python_tool for secure code execution, wrap our MathTool and TextAnalyzer methods into LangChain Tool objects, and collect them into the tools list. Finally, we initialize the Gemini model with custom sampling settings (temperature, max_tokens, top_p, top_k), and if that configuration fails, we gracefully fall back to a simpler one.
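Conceptually, the Tool wrappers give the agent a name-to-function routing table: the model emits a tool name plus input, and the executor dispatches to the matching callable. A plain-Python sketch of that dispatch idea (registry, dispatch, and the echo tool are illustrative, not LangChain internals):

```python
from typing import Callable, Dict

# Minimal registry mapping tool names to callables, as the Tool wrappers do above
registry: Dict[str, Callable[[str], str]] = {
    "advanced_math": lambda expr: f"Result: {eval(expr, {'__builtins__': {}})}",
    "echo": lambda s: s.upper(),
}

def dispatch(tool_name: str, tool_input: str) -> str:
    """Look up the requested tool by name and run it; unknown names become errors."""
    fn = registry.get(tool_name)
    if fn is None:
        return f"Error: unknown tool '{tool_name}'"
    return fn(tool_input)

print(dispatch("advanced_math", "2 + 3 * 4"))  # → Result: 14
print(dispatch("missing", "x"))                # → Error: unknown tool 'missing'
```

The name and description strings on each Tool matter because they are all the model sees when deciding which entry in this table to call.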

prompt_template = ChatPromptTemplate.from_messages([
    ("system", """You are an advanced AI assistant with access to powerful tools.


Key capabilities:
- Python code execution for complex computations
- Advanced mathematical operations
- Text analysis and character counting
- Problem decomposition and step-by-step reasoning


Instructions:
1. Always break down complex problems into smaller steps
2. Use the most appropriate tool for each task
3. Verify your results when possible
4. Provide clear explanations of your reasoning
5. For text analysis questions (like counting characters), use the text_analyzer tool first, then verify with Python if needed


Be precise, thorough, and helpful."""),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])


memory = ConversationBufferWindowMemory(
    k=5,
    return_messages=True,
    memory_key="chat_history"
)


callback_handler = AdvancedCallbackHandler()


agent = create_tool_calling_agent(llm, tools, prompt_template)
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    memory=memory,
    callbacks=[callback_handler],
    max_iterations=10,
    early_stopping_method="generate"
)

This cell constructs the agent's "brain" and workflow: it defines a structured ChatPromptTemplate that instructs the system on its toolset and reasoning style, sets up a sliding-window conversation memory to retain the last five exchanges, and instantiates the AdvancedCallbackHandler for real-time logging. It then creates a tool-calling agent by binding the Gemini LLM, custom tools, and prompt template, and wraps it in an AgentExecutor that manages execution (up to ten steps), leverages memory for context, streams verbose output, and halts cleanly once the agent generates a final response.
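The k=5 window means only the five most recent exchanges survive; older turns are silently evicted rather than accumulating forever. A sketch of that behavior using a plain deque as a stand-in for ConversationBufferWindowMemory:

```python
from collections import deque

# maxlen enforces the sliding window: appending past capacity drops the oldest
window = deque(maxlen=5)

for turn in range(1, 8):
    window.append((f"user msg {turn}", f"agent reply {turn}"))

# Turns 1 and 2 have been evicted; turns 3..7 remain
print([user for user, _ in window])
# → ['user msg 3', 'user msg 4', 'user msg 5', 'user msg 6', 'user msg 7']
```

The trade-off is the usual one: a small window keeps prompts short and cheap, at the cost of the agent forgetting context older than five exchanges.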

def ask_question(question: str) -> Dict[str, Any]:
    """Ask a question to the advanced agent and return detailed results."""
    print(f"\n🤖 Processing: {question}")
    print("=" * 50)

    try:
        result = agent_executor.invoke({"input": question})

        output = result.get("output", "No output generated")

        print("\n📊 Execution Summary:")
        summary = callback_handler.get_execution_summary()
        print(f"Tools used: {summary['total_actions']}")

        return {
            "question": question,
            "answer": output,
            "execution_summary": summary,
            "success": True
        }

    except Exception as e:
        print(f"❌ Error: {str(e)}")
        return {
            "question": question,
            "error": str(e),
            "success": False
        }

test_questions = [
    "How many r's are in strawberry?",
    "Calculate the compound interest on $1000 at 5% for 3 years",
    "Analyze the word frequency in the sentence: 'The quick brown fox jumps over the lazy dog'",
    "What's the fibonacci sequence up to the 10th number?"
]


print("🚀 Advanced Gemini Agent with Riza - Ready!")
print("🔐 API keys configured securely")
print("Testing with sample questions...\n")


results = []
for question in test_questions:
    result = ask_question(question)
    results.append(result)
    print("\n" + "="*80 + "\n")


print("📈 FINAL SUMMARY:")
successful = sum(1 for r in results if r["success"])
print(f"Successfully processed: {successful}/{len(results)} questions")

Finally, we define a helper function, ask_question(), that sends a user query to the agent executor, prints the question header, captures the agent's response (or error), and then outputs a brief execution summary showing how many tool calls were made. It then provides a list of sample questions, covering counting characters, computing compound interest, analyzing word frequency, and generating a Fibonacci sequence, and iterates through them, invoking the agent on each and collecting the results. After running all tests, it prints a concise "FINAL SUMMARY" indicating how many queries were processed successfully, confirming that your advanced Gemini + Riza agent is up and running in Colab.
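Three of the four sample questions have deterministic ground truths, so you can sanity-check the agent's answers offline. A quick reference sketch in pure Python, with no API calls:

```python
# 1) r's in "strawberry"
r_count = "strawberry".count("r")

# 2) compound interest: A = P * (1 + r) ** n, interest earned = A - P
amount = 1000 * (1 + 0.05) ** 3
interest = amount - 1000

# 3) Fibonacci sequence, first 10 numbers
fib = [0, 1]
while len(fib) < 10:
    fib.append(fib[-1] + fib[-2])

print(r_count)              # → 3
print(round(interest, 2))   # roughly 157.63 interest on 1157.63 total
print(fib)                  # → [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

Comparing these values against the agent's answers is a cheap way to verify that tool routing (text_analyzer vs. advanced_math vs. ExecPython) actually produced correct results, not just fluent ones.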

In conclusion, by centering the architecture on Riza's secure execution environment, we've created an AI agent that generates insightful responses via Gemini while also running arbitrary Python code in a fully sandboxed, monitored context. The integration of Riza's ExecPython tool ensures that every computation, from advanced numerical routines to dynamic text analyses, is executed with rigorous security and transparency. With LangChain orchestrating tool calls and a memory buffer maintaining context, we now have a modular framework ready for real-world tasks such as automated data processing, research prototyping, or educational demos.


Check out the Notebook. All credit for this research goes to the researchers of this project.


Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence media platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among readers.
