Python quickstart

This guide will walk you through creating and deploying your first AI agent using the aiXplain Python SDK. You'll learn how to specify a Large Language Model (LLM), equip the agent with tools, and integrate the agent into your application.

Create and export an API Key

Create an API key on the Integrations page in Studio. Once generated,

  1. export it as an environment variable in your terminal:

     export TEAM_API_KEY="your_api_key_here"

  2. or set it directly in your Python project, using the os module or the python-dotenv package with a .env file:

     import os
     os.environ["TEAM_API_KEY"] = "your_api_key_here"
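If you prefer python-dotenv, the snippet below is a minimal sketch; it assumes a .env file in your working directory containing TEAM_API_KEY="your_api_key_here".

# Assumes python-dotenv is installed (pip install python-dotenv)
from dotenv import load_dotenv

load_dotenv()  # reads .env and places TEAM_API_KEY into the process environment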

Install the aiXplain SDK

To get started, install the aiXplain package using pip:

pip install aixplain

1. Build and deploy an Agent

With the SDK installed, copy the code below into a Python file (example.py) or a Jupyter notebook (example.ipynb) code cell. After a few moments, you should see the agent's output!

from aixplain.factories import AgentFactory
from aixplain.modules.agent import ModelTool  # used when adding tools in step 2

# Create a simple agent (no tools yet)
agent = AgentFactory.create(
    name="Agent",
)

# Run the agent and print its response
agent_response = agent.run("What's an agent?")
print(agent_response)
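Agents created this way are typically saved as drafts. If you want the agent to stay available outside the current session, the SDK exposes a deploy method; treat the call below as a sketch based on that assumption.

agent.deploy()  # assumption: promotes the draft agent to a deployed asset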
tip

You can delete an agent using the delete method.

agent.delete()
tip

You can instantiate an existing agent by looking up its ID (see browsing below) and passing it to the get method.

agent = AgentFactory.get("66f744f390118d8653adcd8c")

2. Choose an LLM and add tools

You can explore the aiXplain marketplace to find the best models to integrate into your agent use case. This could mean selecting a task-specific model or choosing the right large language model (LLM) to serve as your agent's core. You may also find models that are powerful enough for your use case on their own, without the need to create an agent, or that you'd prefer to combine into a pipeline.

2.1 Browse for models and tools

There are two ways to browse the assets available on the marketplace: via Studio or the SDK.

Here is a short example of listing pipelines with the SDK.

from aixplain.factories import PipelineFactory

# List the pipelines available to your team
pipeline_list = PipelineFactory.list()["results"]

for pipeline in pipeline_list:
    print(pipeline.__dict__)
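Models can be browsed the same way. The snippet below is a sketch that assumes ModelFactory.list accepts a free-text query and a function filter; the query string is illustrative.

from aixplain.factories import ModelFactory
from aixplain.enums import Function

# Assumption: list() supports filtering by a query string and a Function value
model_list = ModelFactory.list(query="GPT", function=Function.TEXT_GENERATION)["results"]

for model in model_list:
    print(model.id, model.name)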

2.2 Try a model

As with browsing, there are two ways to try models (and pipelines): via Studio or the SDK.

Here is an example of calling a model in the SDK. Suppose we searched TEXT_GENERATION functions and want to try GPT-4o Mini (ID 669a63646eb56306647e1091).

from aixplain.factories import ModelFactory

# Fetch the model by its marketplace ID and run it on a prompt
model = ModelFactory.get("669a63646eb56306647e1091")

response = model.run("What is the capital of France?")
print(response)
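Once you've chosen an LLM and any task-specific models, you can wire them into your agent. The snippet below is a minimal sketch; it assumes AgentFactory.create accepts an llm_id for the core LLM and a tools list of ModelTool entries, and the translation tool is purely illustrative.

from aixplain.factories import AgentFactory
from aixplain.modules.agent import ModelTool
from aixplain.enums import Function

# Assumption: llm_id sets the agent's core LLM; tools attaches task-specific models
agent = AgentFactory.create(
    name="Research Agent",
    description="Answers questions and translates answers on request.",
    llm_id="669a63646eb56306647e1091",  # GPT-4o Mini as the core LLM
    tools=[
        ModelTool(function=Function.TRANSLATION),  # illustrative task-specific tool
    ],
)

agent_response = agent.run("What is the capital of France? Answer in French.")
print(agent_response)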

3. Next steps

Once you’ve built your first agent, there are additional features you may want to add to enhance its capabilities.

3.1 Multi-Agent Systems

You can orchestrate multiple agents to collaborate on complex tasks, forming what we call a "Team Agent." These agents can work together by delegating tasks, improving efficiency, and ensuring thorough task execution. For more detailed guidance, visit our Team Agent guide.
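As a rough sketch of what this can look like in code, assuming the SDK exposes a TeamAgentFactory whose create method accepts a list of existing agents (the two member agents here are hypothetical):

from aixplain.factories import TeamAgentFactory  # assumption: team agent factory

# research_agent and writing_agent are hypothetical agents created earlier
team = TeamAgentFactory.create(
    name="Research Team",
    agents=[research_agent, writing_agent],
)

print(team.run("Summarize what AI agents are and draft a short blog intro."))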

3.2 Memory

Agents can benefit from memory, allowing them to retain context over long interactions and improve their decision-making. You can read more on adding memory to your agents in the Agent Memory guide.
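In the SDK this typically surfaces as a session identifier shared across run calls; the snippet below is a sketch under the assumption that run accepts a session_id parameter.

# Assumption: passing the same session_id lets the agent recall earlier turns
first = agent.run("My name is Ada. What's an agent?", session_id="demo-session")
follow_up = agent.run("What was my name again?", session_id="demo-session")
print(follow_up)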

Ready to get started?

If you’re interested in building a multi-agent system or adding memory to your agent, check out the How to Build an Agent guide for a step-by-step process to set up these advanced features.