Python quickstart
This guide will walk you through creating and deploying your first AI agent using the aiXplain Python SDK. You'll learn how to specify a Large Language Model (LLM), equip the agent with tools, and integrate the agent into your application.
Create and export an API Key
Create an API key on the Integrations page in Studio. Once generated, you can either:
- export it as an environment variable in your terminal, or
- set it directly in your Python project using the os module, or the python-dotenv package with a .env file.
MacOS / Linux:
export TEAM_API_KEY="your_api_key_here"

Windows:
setx TEAM_API_KEY "your_api_key_here"
os module:
import os
os.environ["TEAM_API_KEY"] = "your_api_key_here"

python-dotenv package & .env file:
from dotenv import find_dotenv, load_dotenv  # pip install python-dotenv
load_dotenv(find_dotenv())  # load environment variables from the .env file
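If you go the python-dotenv route, place a .env file in your project root. A minimal sketch of its contents (substitute your real key):

TEAM_API_KEY="your_api_key_here"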
Install the aiXplain SDK
To get started, install the aiXplain package using pip:
pip install aixplain
1. Build and deploy an Agent
An agent combines an LLM of your choice with any tools you add.
With the SDK installed, copy the code below into a Python file (example.py) or a Jupyter notebook (example.ipynb) cell. Run it, and after a few moments you should see the agent's output.
from aixplain.factories import AgentFactory
from aixplain.modules.agent import ModelTool

agent = AgentFactory.create(
    name="Agent",
)

agent_response = agent.run("What's an agent?")
print(agent_response)
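The response object carries metadata alongside the generated text. If you only want the agent's answer, it is nested under the data field; a minimal sketch, assuming the response supports dict-style access as in recent SDK versions:

# Print only the agent's text answer (assumes dict-style access to the response)
print(agent_response["data"]["output"])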
When creating an agent, you can specify which large language model (LLM) to use. The default is OpenAI's GPT-4o, but you can choose from other LLMs available in the aiXplain marketplace.
from aixplain.factories import AgentFactory
from aixplain.modules.agent import ModelTool

agent = AgentFactory.create(
    name="Agent",
    llm_id="66b2708c6eb5635d1c71f611",  # Groq Llama 3.1 70B
)

agent_response = agent.run("What's an agent?")
print(agent_response)
You can also add tools to your agent, such as models or pipelines, to extend its capabilities beyond text-based responses. Tools enable the agent to perform specialized tasks, like generating audio or processing images.
from aixplain.factories import AgentFactory
from aixplain.modules.agent import ModelTool

agent = AgentFactory.create(
    name="Agent",
    tools=[
        ModelTool(model="6633fd59821ee31dd914e232"),  # speech synthesis
    ],
)

agent_response = agent.run("What's an agent? Answer with audio.")
print(agent_response)
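Agents typically start out as drafts. To keep yours available for production use, deploy it; in recent SDK versions this is a single call (a sketch; confirm against your installed version):

# Promote the draft agent so it persists and can be called in production
agent.deploy()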
You can delete an agent using the delete method.
agent.delete()
You can instantiate your agent by searching for its ID (see below) and using the get method.
agent = AgentFactory.get("66f744f390118d8653adcd8c")
2. Choose an LLM and add tools
You can explore the aiXplain marketplace to find the best models to integrate into your agent use case. This could be selecting a task-specific model or choosing the right large language model (LLM) to serve as your agent's core. You may also find models that are powerful enough for your use case without the need to create an agent, or that you'd prefer to combine into a pipeline.
2.1 Browse for models and tools
There are two ways to browse the assets available on the marketplace: via Studio or via the SDK.
- How to search the Marketplace (SDK)
- Search with Discover (Studio)
Here are three short examples of searching for assets in the SDK.
Models:
from aixplain.factories import ModelFactory
from aixplain.enums import Function

model_list = ModelFactory.list(function=Function.TEXT_GENERATION, page_size=50)["results"]
for model in model_list:
    print(model.id, model.name, model.supplier)

Pipelines:
from aixplain.factories import PipelineFactory

pipeline_list = PipelineFactory.list()["results"]
for pipeline in pipeline_list:
    print(pipeline.__dict__)

Agents:
from aixplain.factories import AgentFactory

agent_list = AgentFactory.list()["results"]
for agent in agent_list:
    print(agent.id, agent.name)
2.2 Try a model
As with browsing, there are two ways to try models (and pipelines):
- How to call an asset (SDK)
- Search with Discover - Try it out (Studio)
Here are three examples of calling models in the SDK.
- Text generation
- Image generation
- Speech synthesis
Suppose we searched TEXT_GENERATION functions and want to try GPT-4o Mini (669a63646eb56306647e1091):

model = ModelFactory.get("669a63646eb56306647e1091")
response = model.run("What is the capital of France?")
print(response)
Suppose we searched TEXT_TO_IMAGE_GENERATION functions and want to try Stable Diffusion XL 1.0 - Standard (1024x1024) (663bc4f76eb5637aa56d6d31):

model = ModelFactory.get("663bc4f76eb5637aa56d6d31")
response = model.run("A dog in a bathtub wearing a sailor hat.")
print(response)
Suppose we searched SPEECH_SYNTHESIS functions and want to try:
- English - Premium - Ivy (Child) (618ba6eae2e1a9153ca2a3ba)
- English (India) - Premium (6171eec2c714b775a4b48caf)
- English - Premium - Aria (618ba6e8e2e1a9153ca2a3b4)

model_ids = [
    "618ba6eae2e1a9153ca2a3ba",  # English - Premium - Ivy (Child)
    "6171eec2c714b775a4b48caf",  # English (India) - Premium
    "618ba6e8e2e1a9153ca2a3b4",  # English - Premium - Aria
]
responses = [
    ModelFactory.get(model_id).run("Hi! Hope you're having a lovely day!")
    for model_id in model_ids
]
print(responses)
3. Next steps
Once you’ve built your first agent, there are additional features you may want to add to enhance its capabilities.
3.1 Multi-Agent Systems
You can orchestrate multiple agents to collaborate on complex tasks, forming what we call a "Team Agent." These agents can work together by delegating tasks, improving efficiency, and ensuring thorough task execution. For more detailed guidance, visit our Team Agent guide.
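As a preview, a team agent is created much like a single agent, but from a list of existing agents. The snippet below is a sketch assuming the TeamAgentFactory interface covered in that guide, with two hypothetical agents created earlier via AgentFactory.create:

from aixplain.factories import TeamAgentFactory

# Sketch: combine two previously created agents (hypothetical names) into a team
team_agent = TeamAgentFactory.create(
    name="Research and writing team",
    agents=[research_agent, writer_agent],  # agents you created with AgentFactory.create
)
team_response = team_agent.run("Research AI agents and write a short summary.")
print(team_response)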
3.2 Memory
Agents can benefit from memory, allowing them to retain context over long interactions and improve their decision-making. You can read more on adding memory to your agents in the Agent Memory guide.
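As a preview, memory is tied to a session: you reuse the session identifier returned by one call in the next call so the agent can see the earlier exchange. The snippet below is a sketch assuming the session_id mechanism described in that guide:

# Sketch: carry conversational context across turns via a session id (assumed mechanism)
first = agent.run("My name is Ada. Please remember that.")
session_id = first["data"]["session_id"]  # assumes dict-style access to the response

follow_up = agent.run("What's my name?", session_id=session_id)
print(follow_up)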
Ready to get started?
If you’re interested in building a multi-agent system or adding memory to your agent, check out the How to Build an Agent guide for a step-by-step process to set up these advanced features.