aixplain.modules.model
__author__
Copyright 2022 The aiXplain SDK authors
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Author: Duraikrishna Selvaraju, Thiago Castro Ferreira, Shreyas Sharma and Lucas Pavanelli. Date: September 1st, 2022. Description: Model Class.
Model Objects
class Model(Asset)
A ready-to-use AI model that can be executed synchronously or asynchronously.
This class represents a deployable AI model in the aiXplain platform. It provides functionality for model execution, parameter management, and status tracking. Models can be run with both synchronous and asynchronous APIs, and some models support streaming responses.
Attributes:
- id (Text) - ID of the model.
- name (Text) - Name of the model.
- description (Text) - Detailed description of the model's functionality.
- api_key (Text) - Authentication key for API access.
- url (Text) - Endpoint URL for model execution.
- supplier (Union[Dict, Text, Supplier, int]) - Provider/creator of the model.
- version (Text) - Version identifier of the model.
- function (Function) - The AI function this model performs.
- backend_url (str) - Base URL for the backend API.
- cost (Dict) - Pricing information for model usage.
- input_params (ModelParameters) - Parameters accepted by the model.
- output_params (Dict) - Description of model outputs.
- model_params (ModelParameters) - Configuration parameters for model behavior.
- supports_streaming (bool) - Whether the model supports streaming responses.
- function_type (FunctionType) - Category of function (AI, UTILITY, etc.).
- is_subscribed (bool) - Whether the user has an active subscription.
- created_at (datetime) - When the model was created.
- status (AssetStatus) - Current status of the model.
- additional_info (dict) - Additional model metadata.
__init__
def __init__(id: Text,
name: Text = "",
description: Text = "",
api_key: Text = config.TEAM_API_KEY,
supplier: Union[Dict, Text, Supplier, int] = "aiXplain",
version: Optional[Text] = None,
function: Optional[Function] = None,
is_subscribed: bool = False,
cost: Optional[Dict] = None,
created_at: Optional[datetime] = None,
input_params: Optional[Dict] = None,
output_params: Optional[Dict] = None,
model_params: Optional[Dict] = None,
supports_streaming: bool = False,
status: Optional[AssetStatus] = AssetStatus.ONBOARDED,
function_type: Optional[FunctionType] = FunctionType.AI,
**additional_info) -> None
Initialize a new Model instance.
Arguments:
- id (Text) - ID of the Model.
- name (Text, optional) - Name of the Model. Defaults to "".
- description (Text, optional) - Description of the Model. Defaults to "".
- api_key (Text, optional) - Authentication key for API access. Defaults to config.TEAM_API_KEY.
- supplier (Union[Dict, Text, Supplier, int], optional) - Provider/creator of the model. Defaults to "aiXplain".
- version (Text, optional) - Version identifier of the model. Defaults to None.
- function (Function, optional) - The AI function this model performs. Defaults to None.
- is_subscribed (bool, optional) - Whether the user has an active subscription. Defaults to False.
- cost (Dict, optional) - Pricing information for model usage. Defaults to None.
- created_at (Optional[datetime], optional) - When the model was created. Defaults to None.
- input_params (Dict, optional) - Parameters accepted by the model. Defaults to None.
- output_params (Dict, optional) - Description of model outputs. Defaults to None.
- model_params (Dict, optional) - Configuration parameters for model behavior. Defaults to None.
- supports_streaming (bool, optional) - Whether the model supports streaming responses. Defaults to False.
- status (AssetStatus, optional) - Current status of the model. Defaults to AssetStatus.ONBOARDED.
- function_type (FunctionType, optional) - Category of function. Defaults to FunctionType.AI.
- **additional_info - Additional model metadata.
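A minimal construction sketch. The ID and metadata below are placeholders, and in practice model handles are usually retrieved through the SDK's factory utilities rather than built by hand; direct construction is shown here only to illustrate the parameters above.

```python
from aixplain.modules.model import Model

# Placeholder metadata; a real handle would use an actual model ID
# and rely on config.TEAM_API_KEY for authentication.
model = Model(
    id="000000000000000000000000",     # hypothetical model ID
    name="Example Translation Model",
    description="Translates English text to French.",
    supplier="aiXplain",
    supports_streaming=False,
)

print(model)  # e.g. "Model: Example Translation Model by aiXplain (id=000000000000000000000000)"
```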
to_dict
def to_dict() -> Dict
Convert the model instance to a dictionary representation.
Returns:
Dict - A dictionary containing the model's configuration with keys:
- id: Unique identifier
- name: Model name
- description: Model description
- supplier: Model provider
- additional_info: Extra metadata (excluding None/empty values)
- input_params: Input parameter configuration
- output_params: Output parameter configuration
- model_params: Model behavior parameters
- function: AI function type
- status: Current model status
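A short serialization sketch, reusing the hypothetical `model` handle from the construction example above:

```python
config = model.to_dict()

print(config["id"], config["name"])
print(sorted(config.keys()))  # id, name, description, supplier, ... as listed above
```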
get_parameters
def get_parameters() -> Optional[ModelParameters]
Get the model's configuration parameters.
Returns:
Optional[ModelParameters] - The model's parameter configuration if set, None otherwise.
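Usage sketch on the hypothetical handle from the construction example; many models may simply return None here.

```python
params = model.get_parameters()
if params is not None:
    print(params)  # inspect the configured ModelParameters
else:
    print("No model parameters configured.")
```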
__repr__
def __repr__() -> str
Return a string representation of the model.
Returns:
str - A string in the format "Model: <name> by <supplier> (id=<id>)".
sync_poll
def sync_poll(poll_url: Text,
name: Text = "model_process",
wait_time: float = 0.5,
timeout: float = 300) -> ModelResponse
Poll the platform until an asynchronous operation completes or times out.
This method repeatedly checks the status of an asynchronous operation, implementing exponential backoff for the polling interval.
Arguments:
- poll_url (Text) - URL to poll for operation status.
- name (Text, optional) - Identifier for the operation for logging. Defaults to "model_process".
- wait_time (float, optional) - Initial wait time in seconds between polls. Will increase exponentially up to 60 seconds. Defaults to 0.5.
- timeout (float, optional) - Maximum total time to poll in seconds. Defaults to 300.
Returns:
ModelResponse - The final response from the operation. If polling times out or fails, returns a failed response with an appropriate error message.
Notes:
The minimum wait time between polls is 0.2 seconds. The wait time increases by 10% after each poll up to a maximum of 60 seconds.
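A polling sketch that combines run_async (documented below) with sync_poll, reusing the hypothetical `model` handle from the construction example. The prompt and operation name are placeholders, and field access on the responses follows the fields listed in the Returns sections.

```python
# Submit the job without blocking, then poll its status URL.
start = model.run_async("What is the capital of France?", name="capital_question")

result = model.sync_poll(
    poll_url=start.url,    # polling URL returned by run_async
    name="capital_question",
    wait_time=0.5,         # initial interval; backs off up to 60 seconds
    timeout=120,           # stop polling after two minutes
)
print(result.status)
```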
poll
def poll(poll_url: Text, name: Text = "model_process") -> ModelResponse
Make a single poll request to check operation status.
Arguments:
- poll_url (Text) - URL to poll for operation status.
- name (Text, optional) - Identifier for the operation for logging. Defaults to "model_process".
Returns:
ModelResponse - The current status of the operation. Contains completion status, any results or errors, and usage statistics.
Notes:
This is a low-level method used by sync_poll. Most users should use sync_poll instead for complete operation handling.
run_stream
def run_stream(data: Union[Text, Dict],
parameters: Optional[Dict] = None) -> ModelResponseStreamer
Execute the model with streaming response.
Arguments:
- data (Union[Text, Dict]) - The input data for the model.
- parameters (Optional[Dict], optional) - Additional parameters for model execution. Defaults to None.
Returns:
ModelResponseStreamer - A streamer object that yields response chunks.
Raises:
AssertionError - If the model doesn't support streaming.
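A streaming sketch, guarded by the supports_streaming flag; the exact shape of each yielded chunk depends on the model and is not assumed here. The prompt is a placeholder.

```python
if model.supports_streaming:
    streamer = model.run_stream("Tell me a short story about a robot.")
    for chunk in streamer:
        # Each iteration yields the next partial response chunk.
        print(chunk, end="", flush=True)
```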
run
def run(data: Union[Text, Dict],
name: Text = "model_process",
timeout: float = 300,
parameters: Optional[Dict] = None,
wait_time: float = 0.5,
stream: bool = False) -> Union[ModelResponse, ModelResponseStreamer]
Execute the model and wait for results.
This method handles both synchronous and streaming execution modes. For asynchronous operations, it polls until completion or timeout.
Arguments:
- data (Union[Text, Dict]) - The input data for the model.
- name (Text, optional) - Identifier for the operation for logging. Defaults to "model_process".
- timeout (float, optional) - Maximum time to wait for completion in seconds. Defaults to 300.
- parameters (Dict, optional) - Additional parameters for model execution. Defaults to None.
- wait_time (float, optional) - Initial wait time between polls in seconds. Defaults to 0.5.
- stream (bool, optional) - Whether to use streaming mode. Requires model support. Defaults to False.
Returns:
Union[ModelResponse, ModelResponseStreamer] - The model's response. For streaming mode, returns a streamer object. For regular mode, returns a response object with results or error information.
Notes:
If the model execution becomes asynchronous, this method will poll for completion using sync_poll with the specified timeout and wait_time.
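A minimal synchronous call sketch on the hypothetical `model` handle with a placeholder prompt; the status attribute accessed here follows the ModelResponse description above.

```python
response = model.run(
    data="What is the capital of France?",
    timeout=120,     # give up after two minutes
    wait_time=0.5,   # initial polling interval if execution goes asynchronous
)

print(response.status)
print(response)  # full response object with results or error information
```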
run_async
def run_async(data: Union[Text, Dict],
name: Text = "model_process",
parameters: Optional[Dict] = None) -> ModelResponse
Start asynchronous model execution.
This method initiates model execution but doesn't wait for completion. Use sync_poll to check the operation status later.
Arguments:
- data (Union[Text, Dict]) - The input data for the model.
- name (Text, optional) - Identifier for the operation for logging. Defaults to "model_process".
- parameters (Dict, optional) - Additional parameters for model execution. Defaults to None.
Returns:
ModelResponse - Initial response containing:
- status: Current operation status
- url: URL for polling operation status
- error_message: Any immediate errors
- other response metadata
check_finetune_status
def check_finetune_status(after_epoch: Optional[int] = None)
Check the status of the FineTune model.
Arguments:
- after_epoch (Optional[int], optional) - Status after a given epoch. Defaults to None.
Raises:
Exception - If the 'TEAM_API_KEY' is not provided.
Returns:
FinetuneStatus - The status of the FineTune model.
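A status-check sketch; `finetuned_model` is a hypothetical Model handle pointing at a FineTune asset, and the epoch value is illustrative.

```python
# Requires TEAM_API_KEY to be configured.
status = finetuned_model.check_finetune_status(after_epoch=2)  # hypothetical handle
if status is not None:
    print(status)
```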
delete
def delete() -> None
Delete this model from the aiXplain platform.
This method attempts to delete the model from the platform. It will fail if the user doesn't have appropriate permissions.
Raises:
Exception - If deletion fails or if the user doesn't have permission.
add_additional_info_for_benchmark
def add_additional_info_for_benchmark(display_name: str,
configuration: Dict) -> None
Add benchmark-specific information to the model.
This method updates the model's additional_info with benchmark-related metadata.
Arguments:
- display_name (str) - Name for display in benchmarks.
- configuration (Dict) - Model configuration settings for benchmarking.
from_dict
@classmethod
def from_dict(cls, data: Dict) -> "Model"
Create a Model instance from a dictionary representation.
Arguments:
data (Dict) - Dictionary containing model configuration with keys:
- id: Model identifier
- name: Model name
- description: Model description
- api_key: API key for authentication
- supplier: Model provider information
- version: Model version
- function: AI function type
- is_subscribed: Subscription status
- cost: Pricing information
- created_at: Creation timestamp (ISO format)
- input_params: Input parameter configuration
- output_params: Output parameter configuration
- model_params: Model behavior parameters
- additional_info: Extra metadata
Returns:
Model - A new Model instance populated with the dictionary data.
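A deserialization sketch with placeholder values; how individual fields (for example function or created_at) are parsed is up to the implementation and not assumed here.

```python
data = {
    "id": "000000000000000000000000",      # hypothetical model ID
    "name": "Example Translation Model",
    "description": "Translates English text to French.",
    "api_key": "TEAM_API_KEY_PLACEHOLDER",
    "supplier": "aiXplain",
    "version": "1.0",
    "function": None,
    "is_subscribed": False,
    "cost": None,
    "created_at": "2024-01-01T00:00:00+00:00",  # ISO-format timestamp
    "input_params": None,
    "output_params": None,
    "model_params": None,
    "additional_info": {},
}

model = Model.from_dict(data)
print(model.id, model.name)
```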