
Python quickstart

This guide will walk you through creating and deploying your first AI agent using the aiXplain Python SDK. You'll learn how to specify a Large Language Model (LLM), equip the agent with tools, and integrate the agent into your application.

Create and export an API Key

Create an API key on the Integrations page in Studio. Once generated, you can either:

  1. export it as an environment variable in your terminal:

     export TEAM_API_KEY="your_api_key_here"

  2. set it directly in your Python project, for example with the os module or the python-dotenv package and a .env file:

     import os
     os.environ["TEAM_API_KEY"] = "your_api_key_here"
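With python-dotenv you would simply call load_dotenv() at startup. If you'd rather avoid the extra dependency, a .env file is just KEY=value lines, so a minimal stdlib-only loader can stand in for it. This is a sketch for illustration (python-dotenv handles quoting, interpolation, and edge cases far more robustly):

```python
import os
import tempfile

def load_env_file(path):
    """Minimal .env loader sketch: one KEY=value pair per non-comment line."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Don't clobber variables that are already set in the environment.
            os.environ.setdefault(key.strip(), value.strip().strip('"'))

# Demo with a throwaway file standing in for your project's .env
os.environ.pop("TEAM_API_KEY", None)  # make the demo deterministic
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as fh:
    fh.write('TEAM_API_KEY="your_api_key_here"\n')
load_env_file(fh.name)
print(os.environ["TEAM_API_KEY"])  # prints: your_api_key_here
```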

Install the aiXplain SDK

To get started, install the aiXplain package using pip:

pip install aixplain

1. Create an Agent

With the SDK installed, run the code below, providing a name and description for your agent.

from aixplain.factories import AgentFactory

agent = AgentFactory.create(
    name="Agent",
    description="A conversational AI agent that uses tools and models to answer questions and perform tasks."
)

2. Choose LLMs and tools

Visit the aiXplain marketplace to select an LLM and a set of tools, which can include AI models, utilities, or pipelines. There you can explore the available models to find the right LLM to serve as your agent's core, as well as tools that fit your use case on their own or as part of a combined pipeline solution.

tip

Our agents are currently optimized for GPT-4o and Llama 3.1. However, they are designed to support and integrate with any LLM, offering flexibility and adaptability.

2.1 Browse for LLMs and tools

There are two ways to browse the assets available on the marketplace: through the marketplace website, or programmatically through the SDK. Here is an example of how you can search for assets in the SDK.

from aixplain.factories import ModelFactory
from aixplain.enums import Function

model_list = ModelFactory.list(function=Function.TEXT_GENERATION, page_size=50)["results"]

for model in model_list:
    print(model.id, model.name, model.supplier)
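The list call returns plain model objects, so once you have the results you can filter them in ordinary Python. A minimal sketch using stand-in records (the real SDK objects expose id, name, and supplier attributes, as printed above; the records below are illustrative, not live marketplace data):

```python
from dataclasses import dataclass

@dataclass
class ModelRecord:
    """Stand-in for the SDK's model object (id, name, supplier)."""
    id: str
    name: str
    supplier: str

results = [
    ModelRecord("669a63646eb56306647e1091", "GPT-4o Mini", "OpenAI"),
    ModelRecord("66b2708c6eb5635d1c71f611", "Llama 3.1 70B", "Groq"),
]

# Keep only models from a given supplier
openai_models = [m for m in results if m.supplier == "OpenAI"]
print([m.name for m in openai_models])  # prints: ['GPT-4o Mini']
```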

2.2 Try a model

After browsing, you can call an asset using the examples shown below.

In this example, we will use the GPT-4o Mini model.

model = ModelFactory.get("669a63646eb56306647e1091")

response = model.run("What is the capital of France?")
print(response)

Then combine your LLMs and tools to build an agent!

from aixplain.factories import AgentFactory
from aixplain.modules.agent import ModelTool

agent = AgentFactory.create(
    name="Agent",
    description="An AI agent powered by the Groq LLaMA 3.1 70B model and speech synthesis capabilities, designed to provide informative and audio-enabled responses.",
    tools=[
        ModelTool(model="6171eec2c714b775a4b48caf")  # speech synthesis model
    ],
    llm_id="66b2708c6eb5635d1c71f611"  # Groq LLaMA 3.1 70B
)

3. Run and deploy an Agent

Run and deploy an agent to see its response in action.

agent_response = agent.run("What's an agent?")
print(agent_response)

4. Use Agent Memory

Include the session_id from the first query in subsequent queries to maintain the agent's history.

session_id = agent_response["data"]["session_id"]
print(f"Session id: {session_id}")
agent_response = agent.run(
    "What makes this text interesting?",
    session_id=session_id,
)

print(agent_response)
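The pattern above generalizes to any multi-turn exchange: capture the session_id from the first response, then pass it on every later call. A minimal sketch of that loop, with a stand-in run function in place of the live agent (the stand-in's response shape mirrors the data/session_id structure used above; the output field is illustrative):

```python
def run_agent(query, session_id=None):
    """Stand-in for agent.run: echoes the session id it was given,
    minting a new one on the first call."""
    sid = session_id or "session-001"
    return {"data": {"session_id": sid, "output": f"answer to: {query}"}}

# First turn: no session id yet, so one is created.
first = run_agent("What's an agent?")
sid = first["data"]["session_id"]

# Every follow-up reuses the same session id to keep the conversation history.
followup = run_agent("What makes this text interesting?", session_id=sid)
assert followup["data"]["session_id"] == sid
```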

Next - Create a Team Agent

You can orchestrate multiple agents to collaborate on complex tasks, forming what we call a “Team Agent.” These agents can work together by delegating tasks, improving efficiency, and ensuring thorough task execution. For more detailed guidance, visit our Team Agent guide.