Python quickstart
This guide will walk you through creating and deploying your first AI agent using the aiXplain Python SDK. You'll learn how to specify a Large Language Model (LLM), equip the agent with tools, and integrate the agent into your application.
Create and export an API Key
Create an API key on the Integrations page in Studio. Once generated, either:
- export it as an environment variable in your terminal, or
- set it directly in your Python project, using the os module or the python-dotenv package with a .env file.
- MacOS / Linux
- Windows
export TEAM_API_KEY="your_api_key_here"
setx TEAM_API_KEY "your_api_key_here"
- os module
- python-dotenv package & .env file
import os
os.environ["TEAM_API_KEY"] = "your_api_key_here"
from dotenv import find_dotenv, load_dotenv # pip install python-dotenv
load_dotenv(find_dotenv()) # Load environment variables from .env file
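However you set it, the SDK picks the key up from the environment. As a quick sanity check before making any calls, you can verify the variable is actually visible to Python (the helper below is our own sketch, not part of the SDK):

```python
import os

# Illustrative value; in practice the key comes from your shell or .env file.
os.environ.setdefault("TEAM_API_KEY", "your_api_key_here")

def get_team_api_key() -> str:
    """Return the aiXplain team API key, failing loudly if it is missing."""
    key = os.getenv("TEAM_API_KEY")
    if not key:
        raise RuntimeError("TEAM_API_KEY is not set; export it or add it to your .env file")
    return key

print(get_team_api_key())
```

Failing early with a clear message beats a cryptic authentication error from deep inside an API call.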
Install the aiXplain SDK
To get started, install the aiXplain package using pip:
pip install aixplain
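If you want to confirm the install worked without importing the package at the top of a script, you can check for it programmatically (this small helper is our own addition, not part of the SDK):

```python
import importlib.util

def is_installed(package: str) -> bool:
    """Return True if `package` can be imported in this environment."""
    return importlib.util.find_spec(package) is not None

# Prints True once `pip install aixplain` has completed successfully.
print(is_installed("aixplain"))
```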
1. Create an Agent
With the SDK installed, run the code below, providing a name and description for your agent.
from aixplain.factories import AgentFactory

agent = AgentFactory.create(
    name="Agent",
    description="A conversational AI agent that uses tools and models to answer questions and perform tasks."
)
2. Choose LLMs and tools
Visit the aiXplain marketplace to select an LLM and a set of tools, which can include AI models, utilities, or pipelines. There you can explore the available models and find the best fit for your agent, whether that means choosing the large language model (LLM) that serves as your agent's core, or identifying tools that address your use case on their own or as part of a combined pipeline.
2.1 Browse for LLMs and tools
There are two ways to browse the assets available on marketplace:
- Browse in Discover. Learn more here.
- List in the SDK.
Here are examples of how you can search for assets in the SDK.
- LLM
- Models
- Utilities
- Pipelines
from aixplain.factories import ModelFactory
from aixplain.enums import Function
model_list = ModelFactory.list(function=Function.TEXT_GENERATION, page_size=50)["results"]
for model in model_list:
    print(model.id, model.name, model.supplier)
from aixplain.factories import ModelFactory
from aixplain.enums import Function
model_list = ModelFactory.list(function=Function.SPEECH_SYNTHESIS, page_size=50)["results"]
for model in model_list:
    print(model.id, model.name, model.supplier)
from aixplain.factories import ModelFactory
from aixplain.enums import Function
model_list = ModelFactory.list(function=Function.UTILITIES, page_size=50)["results"]
for model in model_list:
    print(model.id, model.name, model.supplier)
from aixplain.factories import PipelineFactory
pipeline_list = PipelineFactory.list()["results"]
for pipeline in pipeline_list:
    print(pipeline.__dict__)
2.2 Try a model
After browsing, you can call an asset using the examples shown below.
- LLM
- Image Generation
- Speech Synthesis
- Google Search
In this example, we will use the GPT-4o Mini model.
model = ModelFactory.get("669a63646eb56306647e1091")
response = model.run("What is the capital of France?")
response
In this example, we will use the Stable Diffusion XL 1.0 - Standard (1024x1024) model.
model = ModelFactory.get("663bc4f76eb5637aa56d6d31")
response = model.run("A dog in a bathtub wearing a sailor hat.")
response
In this example, we will use the Speech Synthesis - English (India) - Premium - B-MALE model.
model = ModelFactory.get("6171eec2c714b775a4b48caf")
response = model.run("Hi! Hope you're having a lovely day!")
response
In this example, we will use the Google Search model.
model = ModelFactory.get("65c51c556eb563350f6e1bb1")
response = model.run("What is an agent?")
response
Then combine your LLMs and tools to build an agent!
from aixplain.factories import AgentFactory
from aixplain.modules.agent import ModelTool
agent = AgentFactory.create(
    name="Agent",
    description="An AI agent powered by the Groq LLaMA 3.1 70B model and speech synthesis capabilities, designed to provide informative and audio-enabled responses.",
    tools=[
        ModelTool(model="6171eec2c714b775a4b48caf")  # speech synthesis model
    ],
    llm_id="66b2708c6eb5635d1c71f611"  # Groq LLaMA 3.1 70B
)
3. Run and deploy an Agent
Run and deploy an agent to see its response in action.
agent_response = agent.run("What's an agent?")
print(agent_response)
4. Use Agent Memory
Include the session_id from the first query in subsequent queries to maintain the agent's conversation history.
session_id = agent_response["data"]["session_id"]
print(f"Session id: {session_id}")
agent_response = agent.run(
"What makes this text interesting?",
session_id=session_id,
)
print(agent_response)
Next - Create a Team Agent
You can orchestrate multiple agents to collaborate on complex tasks, forming what we call a “Team Agent.” These agents can work together by delegating tasks, improving efficiency, and ensuring thorough task execution. For more detailed guidance, visit our Team Agent guide.