
How to integrate an asset

This guide shows how to integrate models, pipelines, and agents into your workflows.

Models

To find all the models available on aiXplain, explore our marketplace.

To see the list of available models, run the code below.

from aixplain.factories import ModelFactory
from aixplain.enums import Supplier

model_list = ModelFactory.list(suppliers=Supplier.GROQ)["results"]
for model in model_list:
    print(model.__dict__)

The aiXplain SDK allows you to run models synchronously or asynchronously. The examples below use the GPT-4 model.

Synchronous

from aixplain.factories import ModelFactory

model = ModelFactory.get("6414bd3cd09663e9225130e8")

result = model.run({
    "text": "TEXT_DATA",
    # "prompt": "<PROMPT_TEXT_DATA>",
    # "context": "<CONTEXT_TEXT_DATA>",
    # "temperature": "<TEMPERATURE_TEXT_DATA>",
    # "max_tokens": "<MAX_TOKENS_TEXT_DATA>",
    # "history": "<HISTORY_TEXT_DATA>",
})
print(result)

Asynchronous

import time

from aixplain.factories import ModelFactory

model = ModelFactory.get("6414bd3cd09663e9225130e8")

start_response = model.run_async({
    "text": "TEXT_DATA",
    # "prompt": "<PROMPT_TEXT_DATA>",
    # "context": "<CONTEXT_TEXT_DATA>",
    # "temperature": "<TEMPERATURE_TEXT_DATA>",
    # "max_tokens": "<MAX_TOKENS_TEXT_DATA>",
    # "history": "<HISTORY_TEXT_DATA>",
})

# Polling loop: wait for the asynchronous request to complete
while True:
    status = model.poll(start_response['url'])
    print(status)
    if status['status'] != 'IN_PROGRESS':
        break
    time.sleep(5)
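The polling loop above can be factored into a small reusable helper. Note that `wait_for_completion` is an illustrative name, not part of the aiXplain SDK; it accepts any zero-argument callable that returns a status dictionary, such as `lambda: model.poll(start_response['url'])`.

```python
import time


def wait_for_completion(poll, interval=5.0, timeout=300.0):
    """Poll until the status leaves IN_PROGRESS or the timeout expires.

    `poll` is any zero-argument callable returning a status dict,
    e.g. lambda: model.poll(start_response['url']).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = poll()
        if status.get('status') != 'IN_PROGRESS':
            return status
        time.sleep(interval)
    raise TimeoutError("request did not finish within the timeout")
```

With this helper, the loop collapses to `status = wait_for_completion(lambda: model.poll(start_response['url']))`.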

Pipelines

To learn more about how to build pipelines, follow this guide.

To display pipelines that you have onboarded, run the code below.

from aixplain.factories import PipelineFactory

pipeline_list = PipelineFactory.list()['results']
for pipeline in pipeline_list:
    print(pipeline.__dict__)

You can call a pipeline using this code.

pipeline = PipelineFactory.get('<pipeline_id>')

Then run the pipeline either synchronously or asynchronously.

Synchronous

result = pipeline.run("This is a sample text")

For multi-input pipelines, pass a dictionary whose keys are the labels of the input nodes and whose values are the corresponding content.

result = pipeline.run({
    "Input 1": "This is a sample text to input node 1.",
    "Input 2": "This is a sample text to input node 2."
})
# or
result = pipeline.run(data={
    "Input 1": "This is a sample text to input node 1.",
    "Input 2": "This is a sample text to input node 2."
})

Asynchronous

import time

from aixplain.factories import PipelineFactory

pipeline = PipelineFactory.get("<pipeline_id>")

start_response = pipeline.run_async("This is a sample text")

# Polling loop: wait for the asynchronous request to complete
while True:
    result = pipeline.poll(start_response['url'])
    if result.get("completed"):
        print(result)
        break
    else:
        time.sleep(5)  # Wait 5 seconds before checking the result again

Agents

Learn more about building agents through this guide.

To display agents that you have onboarded, run the code below.

from aixplain.factories import AgentFactory

agent_list = AgentFactory.list()["results"]
for agent in agent_list:
    print(agent.__dict__)

Once you know an agent's unique ID, you can access the agent directly.

agent = AgentFactory.get("<agent_id>")
agent.__dict__

Run the agent using the following code.

agent_response = agent.run(
    "This is an example"
)

display(agent_response)

API Requests

Models

Run a model on aiXplain with POST requests and fetch results with GET. The example here utilises the GPT-4 model.

import requests
import time

AIXPLAIN_API_KEY = "TEAM_API_KEY"
MODEL_ID = "6414bd3cd09663e9225130e8"
POST_URL = f"https://models.aixplain.com/api/v1/execute/{MODEL_ID}"

headers = {
    "x-api-key": AIXPLAIN_API_KEY,
    "Content-Type": "application/json"
}

data = {
    "text": "<TEXT_TEXT_DATA>",
    # "prompt": "<PROMPT_TEXT_DATA>",
    # "context": "<CONTEXT_TEXT_DATA>",
    # "temperature": "<TEMPERATURE_TEXT_DATA>",
    # "max_tokens": "<MAX_TOKENS_TEXT_DATA>",
    # "history": "<HISTORY_TEXT_DATA>"
}

# POST request to execute the model
response = requests.post(POST_URL, headers=headers, json=data)
response_data = response.json()
request_id = response_data.get("requestId")

get_url = f"https://models.aixplain.com/api/v1/data/{request_id}"

# Polling loop: GET request until the result is completed
while True:
    get_response = requests.get(get_url, headers=headers)
    result = get_response.json()

    if result.get("completed"):
        print(result)
        break
    else:
        time.sleep(5)  # Wait 5 seconds before checking the result again
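The POST-then-poll pattern above can be wrapped in a helper function. `execute_and_wait` is a hypothetical name, not an aiXplain API; it mirrors the URL shapes from the snippet and accepts an optional `http` object with the same `.post`/`.get` interface as the `requests` module, which makes it easy to exercise offline.

```python
import time


def execute_and_wait(post_url, result_url_template, headers, data,
                     http=None, interval=5.0, max_polls=60):
    """POST a job, then GET its result URL until `completed` is true.

    `result_url_template` contains a `{request_id}` placeholder, e.g.
    "https://models.aixplain.com/api/v1/data/{request_id}".
    """
    if http is None:
        import requests
        http = requests
    response_data = http.post(post_url, headers=headers, json=data).json()
    result_url = result_url_template.format(request_id=response_data["requestId"])
    for _ in range(max_polls):
        result = http.get(result_url, headers=headers).json()
        if result.get("completed"):
            return result
        time.sleep(interval)
    raise TimeoutError(f"no completed result after {max_polls} polls")
```

For the model example above, the call would look like `execute_and_wait(POST_URL, "https://models.aixplain.com/api/v1/data/{request_id}", headers, data)`.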

Pipelines

Execute pipelines with inputs and retrieve results using requests.

import requests
import time

AIXPLAIN_API_KEY = "TEAM_API_KEY"
PIPELINE_ID = "<pipeline_id>"
POST_URL = f"https://platform-api.aixplain.com/assets/pipeline/execution/run/{PIPELINE_ID}"

headers = {
    "x-api-key": AIXPLAIN_API_KEY,
    "Content-Type": "application/json"
}

data = {
    "Input 1": "<INPUT_1_TEXT_DATA>",
    "Input 2": "<INPUT_2_AUDIO_DATA>"
}

# POST request to execute the pipeline
response = requests.post(POST_URL, headers=headers, json=data)
response_data = response.json()
get_url = response_data.get("url")

# Polling loop: GET request until the result is completed
while True:
    get_response = requests.get(get_url, headers=headers)
    result = get_response.json()

    if result.get("completed"):
        print(result)
        break
    else:
        time.sleep(5)  # Wait 5 seconds before checking the result again

Agents

Send queries to aiXplain agents and fetch the results using API requests.

import requests
import time

AIXPLAIN_API_KEY = "TEAM_API_KEY"
AGENT_ID = "<agent_id>"
POST_URL = f"https://platform-api.aixplain.com/sdk/agents/{AGENT_ID}/run"

headers = {
    "x-api-key": AIXPLAIN_API_KEY,
    "Content-Type": "application/json"
}

data = {
    "query": "<QUERY_TEXT_DATA>",
    # "sessionId": "<SESSIONID_TEXT_DATA>",  # Optional: specify the sessionId from a previous message
}

# POST request to execute the agent
response = requests.post(POST_URL, headers=headers, json=data)
response_data = response.json()
request_id = response_data.get("requestId")

get_url = f"https://platform-api.aixplain.com/sdk/agents/{request_id}/result"

# Polling loop: GET request until the result is completed
while True:
    get_response = requests.get(get_url, headers=headers)
    result = get_response.json()

    if result.get("completed"):
        print(result)
        break
    else:
        time.sleep(5)  # Wait 5 seconds before checking the result again
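To continue a conversation across runs, the optional `sessionId` field shown in the request body above can carry the id from an earlier exchange. A small builder like the sketch below keeps the payload construction in one place; `agent_payload` is our name, not part of any aiXplain API, and the `"sessionId"` key simply follows the commented example above.

```python
def agent_payload(query, session_id=None):
    """Build the JSON body for an agent run.

    If `session_id` is given (e.g. from a previous run's result),
    include it so the agent continues that conversation.
    """
    payload = {"query": query}
    if session_id is not None:
        payload["sessionId"] = session_id
    return payload
```

The first request would use `agent_payload("Hello")`, and follow-ups would pass the session id returned by the earlier run.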

OpenAI API

You can call aiXplain model endpoints through the OpenAI Python client. Here is an example to help you get started.

import openai

openai.api_key = "<OPENAI_API_KEY>"
openai.base_url = "https://models.aixplain.com/api/v1/execute/<model_id>"  # aiXplain model endpoint

response = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the importance of AI in modern industries?"}
    ],
    temperature=0.7,
    max_tokens=150,
    top_p=1.0,
    frequency_penalty=0.0,
    presence_penalty=0.0
)

print("Response from aiXplain:")
print(response)
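Printing the whole response includes choice and usage metadata; when only the generated text is needed, the standard OpenAI chat-completion shape exposes it at `choices[0].message.content`. The helper name below is ours, not part of either library.

```python
def assistant_text(response):
    """Return the generated text of the first choice in a chat completion."""
    return response.choices[0].message.content
```

So the final line of the example could instead be `print(assistant_text(response))`.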