
Overview

Models

aiXplain has an ever-expanding catalog of ready-to-use AI models from different suppliers (e.g. AWS, Microsoft, Google, Meta) for various functions (e.g. Machine Translation, Speech Recognition, Large Language Modeling, Sentiment Analysis). These models are available on demand and can be connected together into Pipelines and Agents.


Curation

aiXplain's models are categorized using several filters, such as Function, Supplier and Modalities (e.g. source language, target language) to make searching easier.

Standardization

All aiXplain models can be run using the same syntax via the SDK, and our standardizations allow for model swapping in your pipelines.
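Because every model exposes the same calling interface, swapping one model for another leaves the surrounding code unchanged. The sketch below illustrates the idea with stand-in classes; these classes and the response shape are illustrative, not the SDK's actual implementation.

```python
# Illustrative sketch of a standardized model interface: every model,
# regardless of supplier, is called the same way. The classes below are
# stand-ins, not the aiXplain SDK's internals.

class TranslationModelA:
    def run(self, text):
        return {"status": "SUCCESS", "data": f"[supplier-A translation of] {text}"}

class TranslationModelB:
    def run(self, text):
        return {"status": "SUCCESS", "data": f"[supplier-B translation of] {text}"}

def translate(model, text):
    # Caller code is identical no matter which model is plugged in.
    return model.run(text)["data"]

print(translate(TranslationModelA(), "Hello"))
print(translate(TranslationModelB(), "Hello"))
```

Swapping models in a real pipeline works the same way: only the model object changes, never the calling code.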

Onboarding

You can onboard your own models onto aiXplain, making them accessible for deployment and utilization within your applications.

How to search the marketplace

This guide walks you through searching the marketplace for various aiXplain assets, including models, metrics, and data.

Models

aiXplain's collection of models is searchable using queries and filters, and models are directly accessible using their IDs (unique identifiers).

Models are searchable on the SDK using the following search parameters:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `query` | `Optional[Text]` | `""` | Search query to filter models based on their name or description. |
| `function` | `Optional[Function]` | `None` | AI function filter (e.g., Translation, Text Generation). |
| `suppliers` | `Optional[Union[Supplier, List[Supplier]]]` | `None` | Filter models by suppliers (e.g., AWS, Google, Microsoft). |
| `source_languages` | `Optional[Union[Language, List[Language]]]` | `None` | Filter models based on their input language(s). |
| `target_languages` | `Optional[Union[Language, List[Language]]]` | `None` | Filter models based on their output language(s). |
| `is_finetunable` | `Optional[bool]` | `None` | Specify if models should support fine-tuning. |
| `ownership` | `Optional[Tuple[OwnershipType, List[OwnershipType]]]` | `None` | Filter models by ownership type (e.g., SUBSCRIBED, OWNER). |
| `sort_by` | `Optional[SortBy]` | `None` | Attribute to sort the retrieved models by (e.g., name, creation date). |
| `sort_order` | `SortOrder` | `SortOrder.ASCENDING` | Specify the sorting order (ascending or descending). |
| `page_number` | `int` | `0` | Page number for paginated results. |
| `page_size` | `int` | `20` | Number of results to retrieve per page. |
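The `page_number` and `page_size` parameters let you walk the full result set one page at a time. The loop below sketches that pattern; `fake_list` is a stand-in for `ModelFactory.list` so the example is self-contained.

```python
# Sketch of paging through search results. `fake_list` stands in for
# ModelFactory.list, which returns one page of results per call.
CATALOG = [f"model-{i}" for i in range(45)]  # pretend catalog of 45 models

def fake_list(page_number=0, page_size=20):
    start = page_number * page_size
    return {"results": CATALOG[start:start + page_size]}

all_models = []
page = 0
while True:
    results = fake_list(page_number=page, page_size=20)["results"]
    if not results:
        break  # an empty page means we have seen everything
    all_models.extend(results)
    page += 1

print(len(all_models))  # 45
```

The same loop works against the real `ModelFactory.list` by passing the remaining search parameters alongside `page_number` and `page_size`.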

Let's use query, function, source_languages, target_languages and suppliers to search for translation models from English to Canadian French.

from aixplain.factories import ModelFactory
from aixplain.enums import Function, Language, Supplier

model_list = ModelFactory.list(
    "Canada",
    function=Function.TRANSLATION,
    source_languages=Language.English,
    target_languages=Language.French,
    suppliers=[Supplier.AWS, Supplier.GOOGLE, Supplier.MICROSOFT],
)["results"]

for model in model_list:
    print(model.__dict__)
tip

Use the _member_names_ attribute to see the list of available function types, languages and suppliers.

Function._member_names_
Language._member_names_
Supplier._member_names_

Direct Access

Once you know a model's ID, you can access the model directly (without searching for it).

EXAMPLE OpenAI's GPT-4 model has ID 6414bd3cd09663e9225130e8.

Instantiate a model object

from aixplain.factories import ModelFactory
model = ModelFactory.get('6414bd3cd09663e9225130e8')
model.__dict__

Once you have identified a suitable model, you can use it for inference, benchmarking, or fine-tuning, or integrate it into custom AI pipelines.

Next, create or integrate these assets into AI agents to enhance your applications.

How to call an asset

The aiXplain SDK allows you to run models synchronously (Python) or asynchronously (Python and Swift). You can also process Data assets (Python).

Let's use Groq's Llama 70B as an example.

from aixplain.factories import ModelFactory
from aixplain.enums import Supplier

model_list = ModelFactory.list(suppliers=Supplier.GROQ)["results"]
for model in model_list:
    print(model.__dict__)
model = ModelFactory.get("6626a3a8c8f1d089790cf5a2")
note

Model (and pipeline) inputs can be URLs, file paths, or direct text/labels (if applicable).
The examples below use only direct text.
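A model input, then, can be a URL, a local file path, or direct text. The helper below sketches how those three forms can be told apart; the function and its dispatch logic are hypothetical, not part of the SDK.

```python
import os

# Hypothetical helper illustrating the three input forms a model (or
# pipeline) can accept. This dispatch logic is illustrative, not the SDK's.
def classify_input(value: str) -> str:
    if value.startswith(("http://", "https://")):
        return "url"
    if os.path.exists(value):
        return "file path"
    return "direct text"

print(classify_input("https://example.com/audio.wav"))  # url
print(classify_input("Tell me a joke about dogs."))     # direct text
```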

Synchronous

model.run("Tell me a joke about dogs.")

Use a dictionary to specify additional parameters, or when the model takes multiple inputs.

model.run(
    {
        "text": "Tell me a joke about dogs.",
        "max_tokens": 10,
        "temperature": 0.5,
    }
)

Asynchronous

start_response = model.run_async("Tell me a joke about dogs.")
start_response

Use the poll method to monitor inference progress.

import time

while True:
    status = model.poll(start_response['url'])
    print(status)
    if status['status'] != 'IN_PROGRESS':
        break
    time.sleep(1)  # wait 1 second before polling again
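An open-ended `while True` loop can hang forever if a job stalls, so in practice you may want a deadline. The sketch below adds a timeout around the same poll-and-sleep pattern; `fake_poll` is a stub standing in for `model.poll`, and the response shape mirrors the snippet above.

```python
import time

# Stub standing in for model.poll: reports IN_PROGRESS twice, then SUCCESS.
_calls = {"n": 0}

def fake_poll(url):
    _calls["n"] += 1
    status = "SUCCESS" if _calls["n"] >= 3 else "IN_PROGRESS"
    return {"status": status}

def wait_for_result(url, timeout_s=30, interval_s=0.01):
    # Same poll-and-sleep pattern as above, but bounded by a deadline.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = fake_poll(url)
        if status["status"] != "IN_PROGRESS":
            return status
        time.sleep(interval_s)
    raise TimeoutError("polling timed out")

print(wait_for_result("https://example.com/poll-url"))  # {'status': 'SUCCESS'}
```

Replacing `fake_poll` with `model.poll` (and a 1-second interval) gives a bounded version of the loop above.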

Process Data Assets

You can also perform inference on Data assets (Corpora or Datasets). You will need to onboard a data asset to use it.

note

Inference on Data assets is only available in Python.

Each data asset has an ID, and each column in that data asset has an ID, too. Specify both to perform inference:

Run

result = model.run(
    data="64acbad666608858f693a3a0",
    data_asset="64acbad666608858f693a39f"
)

Run Async

start_response = model.run_async(
    data="64acbad666608858f693a3a0",
    data_asset="64acbad666608858f693a39f"
)

With these steps, you can call models and pipelines and integrate them into your agents to perform AI tasks.