In this tutorial, we demonstrate how Microsoft's AutoGen framework empowers developers to orchestrate complex, multi-agent workflows with minimal code. By leveraging AutoGen's RoundRobinGroupChat and TeamTool abstractions, you can seamlessly assemble specialist assistants, such as Researchers, FactCheckers, Critics, Summarizers, and Editors, into a cohesive "DeepDive" tool. AutoGen handles the intricacies of turn-taking, termination conditions, and streaming output, allowing you to focus on defining each agent's expertise and system prompts rather than plumbing together callbacks or manual prompt chains. Whether conducting in-depth research, validating facts, refining prose, or integrating third-party tools, AutoGen provides a unified API that scales from simple two-agent pipelines to elaborate five-agent collaboratives.
!pip install -q autogen-agentchat[gemini] autogen-ext[openai] nest_asyncio
We install the AutoGen AgentChat package with Gemini support, the OpenAI extension for API compatibility, and the nest_asyncio library to patch the notebook's event loop, ensuring you have all the components needed to run asynchronous, multi-agent workflows in Colab.
import os, nest_asyncio
from getpass import getpass
nest_asyncio.apply()
os.environ["GEMINI_API_KEY"] = getpass("Enter your Gemini API key: ")
We import and apply nest_asyncio to enable nested event loops in notebook environments, then securely prompt for your Gemini API key using getpass and store it in os.environ for authenticated model client access.
from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(
    model="gemini-1.5-flash-8b",
    api_key=os.environ["GEMINI_API_KEY"],
    api_type="google",
)
We initialize an OpenAI-compatible chat client pointed at Google's Gemini by specifying the gemini-1.5-flash-8b model, injecting your saved Gemini API key, and setting api_type="google", giving you a ready-to-use model_client for downstream AutoGen agents.
from autogen_agentchat.agents import AssistantAgent

researcher = AssistantAgent(name="Researcher", system_message="Gather and summarize factual information.", model_client=model_client)
factchecker = AssistantAgent(name="FactChecker", system_message="Verify facts and cite sources.", model_client=model_client)
critic = AssistantAgent(name="Critic", system_message="Critique clarity and logic.", model_client=model_client)
summarizer = AssistantAgent(name="Summarizer", system_message="Condense into a brief executive summary.", model_client=model_client)
editor = AssistantAgent(name="Editor", system_message="Polish language and signal APPROVED when done.", model_client=model_client)
We define five specialized assistant agents, Researcher, FactChecker, Critic, Summarizer, and Editor, each initialized with a role-specific system message and the shared Gemini-powered model client, enabling them respectively to gather information, verify accuracy, critique content, condense summaries, and polish language within the AutoGen workflow.
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import MaxMessageTermination, TextMentionTermination

max_msgs = MaxMessageTermination(max_messages=20)
text_term = TextMentionTermination(text="APPROVED", sources=["Editor"])
termination = max_msgs | text_term

team = RoundRobinGroupChat(
    participants=[researcher, factchecker, critic, summarizer, editor],
    termination_condition=termination,
)
We import the RoundRobinGroupChat class along with two termination conditions, then compose a stop rule that fires after 20 total messages or when the Editor agent mentions "APPROVED." Finally, we instantiate a round-robin team of the five specialized agents with that combined termination logic, enabling them to cycle through research, fact-checking, critique, summarization, and editing until one of the stop conditions is met.
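To make the turn-taking and stop-rule semantics concrete, here is a minimal plain-Python sketch (no AutoGen, no LLM calls) of what a round-robin team with an OR-combined termination condition does. The stand-in agents and canned replies are hypothetical, purely for illustration:

```python
from itertools import cycle

def run_round_robin(agents, max_messages=20):
    """Cycle through (name, reply_fn) pairs until a stop condition fires."""
    transcript = []
    for name, reply_fn in cycle(agents):
        if len(transcript) >= max_messages:          # MaxMessageTermination analogue
            break
        msg = reply_fn(len(transcript))
        transcript.append((name, msg))
        if name == "Editor" and "APPROVED" in msg:   # TextMentionTermination analogue
            break
    return transcript

# Hypothetical stand-ins: the Editor approves on its third turn (message 9).
agents = [
    ("Researcher",  lambda i: "notes"),
    ("FactChecker", lambda i: "verified"),
    ("Critic",      lambda i: "feedback"),
    ("Summarizer",  lambda i: "summary"),
    ("Editor",      lambda i: "APPROVED" if i >= 9 else "more edits"),
]
log = run_round_robin(agents)
print(len(log), log[-1])  # stops at 10 messages, before the 20-message cap
```

Because the two conditions are combined with OR, whichever fires first ends the run: the text mention here stops the team at message 10, but a team that never approves would still halt at the 20-message cap.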
from autogen_agentchat.tools import TeamTool

deepdive_tool = TeamTool(team=team, name="DeepDive", description="Collaborative multi-agent deep dive")
We wrap our RoundRobinGroupChat team in a TeamTool named "DeepDive" with a human-readable description, effectively packaging the entire multi-agent workflow into a single callable tool that other agents can invoke seamlessly.
host = AssistantAgent(
    name="Host",
    model_client=model_client,
    tools=[deepdive_tool],
    system_message="You have access to a DeepDive tool for in-depth research."
)
We create a "Host" assistant agent configured with the shared Gemini-powered model_client, grant it the DeepDive team tool for orchestrating in-depth research, and prime it with a system message that informs it of its ability to invoke the multi-agent DeepDive workflow.
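The value of the TeamTool pattern is that the Host only sees a named callable, not the team's internals. A plain-Python sketch of that encapsulation (the function names and canned return value are hypothetical; a real Host lets the LLM decide when to call the tool):

```python
# Stand-in for the wrapped RoundRobinGroupChat run: one callable, opaque internals.
def deepdive(topic: str) -> str:
    return f"Executive summary on {topic} (APPROVED)"

# The Host's tool registry maps tool names to callables.
host_tools = {"DeepDive": deepdive}

def host_run(task: str) -> str:
    # A real Host agent would let the model choose a tool and its arguments;
    # here we invoke DeepDive directly for illustration.
    return host_tools["DeepDive"](task)

print(host_run("Impacts of Model Context Protocol on Agentic AI"))
```

Swapping in a different team behind the same tool name requires no change to the Host, which is what makes the workflow modular and reusable.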
import asyncio

async def run_deepdive(topic: str):
    result = await host.run(task=f"Deep dive on: {topic}")
    print("🔍 DeepDive result:\n", result)
    await model_client.close()

topic = "Impacts of Model Context Protocol on Agentic AI"
loop = asyncio.get_event_loop()
loop.run_until_complete(run_deepdive(topic))
Finally, we define an asynchronous run_deepdive function that tells the Host agent to execute the DeepDive team tool on a given topic, prints the comprehensive result, and then closes the model client; we then grab Colab's existing asyncio loop and run the coroutine to completion for seamless, synchronous execution.
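The loop.run_until_complete dance (plus nest_asyncio) is only needed because notebooks already run an event loop. In a plain script there is no running loop, so asyncio.run is the simpler entry point, as this small stdlib-only sketch shows (fetch is a hypothetical stand-in for the real host.run call, which needs the Gemini client):

```python
import asyncio

async def fetch(topic: str) -> str:
    # Stand-in for `await host.run(task=...)`; no network or API key needed.
    await asyncio.sleep(0)
    return f"DeepDive result for: {topic}"

# In a plain script, asyncio.run creates and tears down the loop itself.
# Inside Colab/Jupyter a loop is already running, which is why the tutorial
# patches it with nest_asyncio and uses loop.run_until_complete instead.
print(asyncio.run(fetch("Model Context Protocol")))
```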
In conclusion, integrating Google Gemini via AutoGen's OpenAI-compatible client and wrapping our multi-agent team as a callable TeamTool gives us a powerful template for building highly modular and reusable workflows. AutoGen abstracts away event loop management (with nest_asyncio), streaming responses, and termination logic, enabling us to iterate quickly on agent roles and overall orchestration. This advanced pattern streamlines the development of collaborative AI systems and lays the foundation for extending into retrieval pipelines, dynamic selectors, or conditional execution strategies.

Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence media platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among readers.