Version: 1.0

aixplain.modules.pipeline.pipeline

Auto-generated pipeline module containing node classes and Pipeline factory methods.
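All of the generated classes below follow one pattern: a node class parameterized by its `Inputs` and `Outputs` schema classes, each of which accepts an optional back-reference to its owning node. The following is a minimal, self-contained sketch of that pattern using standard-library generics; the class bodies here are mocks for illustration, not the real aixplain implementations, which carry additional wiring.

```python
from typing import Generic, TypeVar

class Inputs:
    """Base input schema; keeps a back-reference to its owning node."""
    def __init__(self, node=None):
        self.node = node

class Outputs:
    """Base output schema; keeps a back-reference to its owning node."""
    def __init__(self, node=None):
        self.node = node

I = TypeVar("I", bound=Inputs)
O = TypeVar("O", bound=Outputs)

class AssetNode(Generic[I, O]):
    """A pipeline node typed by its input and output schemas."""
    inputs_class: type = Inputs
    outputs_class: type = Outputs

    def __init__(self):
        # Instantiate the schemas and wire them back to this node.
        self.inputs = self.inputs_class(node=self)
        self.outputs = self.outputs_class(node=self)

# One concrete node, mirroring the generated Translation classes below.
class TranslationInputs(Inputs):
    pass

class TranslationOutputs(Outputs):
    pass

class Translation(AssetNode[TranslationInputs, TranslationOutputs]):
    inputs_class = TranslationInputs
    outputs_class = TranslationOutputs

node = Translation()
```

Typing the node this way lets static checkers know that `node.inputs` is a `TranslationInputs` rather than a bare `Inputs`.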

TextNormalizationInputs Objects

class TextNormalizationInputs(Inputs)

[view_source]

Input parameters for TextNormalization.

__init__

def __init__(node=None)

[view_source]

Initialize TextNormalizationInputs.

TextNormalizationOutputs Objects

class TextNormalizationOutputs(Outputs)

[view_source]

Output parameters for TextNormalization.

__init__

def __init__(node=None)

[view_source]

Initialize TextNormalizationOutputs.

TextNormalization Objects

class TextNormalization(AssetNode[TextNormalizationInputs,
TextNormalizationOutputs])

[view_source]

TextNormalization node.

Converts unstructured or non-standard textual data into a more readable and uniform format, dealing with abbreviations, numerals, and other non-standard words.

InputType: text OutputType: label

ParaphrasingInputs Objects

class ParaphrasingInputs(Inputs)

[view_source]

Input parameters for Paraphrasing.

__init__

def __init__(node=None)

[view_source]

Initialize ParaphrasingInputs.

ParaphrasingOutputs Objects

class ParaphrasingOutputs(Outputs)

[view_source]

Output parameters for Paraphrasing.

__init__

def __init__(node=None)

[view_source]

Initialize ParaphrasingOutputs.

Paraphrasing Objects

class Paraphrasing(AssetNode[ParaphrasingInputs, ParaphrasingOutputs])

[view_source]

Paraphrasing node.

Expresses the meaning of something written or spoken using different words, while preserving the original message.

InputType: text OutputType: text

LanguageIdentificationInputs Objects

class LanguageIdentificationInputs(Inputs)

[view_source]

Input parameters for LanguageIdentification.

__init__

def __init__(node=None)

[view_source]

Initialize LanguageIdentificationInputs.

LanguageIdentificationOutputs Objects

class LanguageIdentificationOutputs(Outputs)

[view_source]

Output parameters for LanguageIdentification.

__init__

def __init__(node=None)

[view_source]

Initialize LanguageIdentificationOutputs.

LanguageIdentification Objects

class LanguageIdentification(AssetNode[LanguageIdentificationInputs,
LanguageIdentificationOutputs])

[view_source]

LanguageIdentification node.

Detects the language in which a given text is written, aiding in multilingual platforms or content localization.

InputType: text OutputType: text

BenchmarkScoringAsrInputs Objects

class BenchmarkScoringAsrInputs(Inputs)

[view_source]

Input parameters for BenchmarkScoringAsr.

__init__

def __init__(node=None)

[view_source]

Initialize BenchmarkScoringAsrInputs.

BenchmarkScoringAsrOutputs Objects

class BenchmarkScoringAsrOutputs(Outputs)

[view_source]

Output parameters for BenchmarkScoringAsr.

__init__

def __init__(node=None)

[view_source]

Initialize BenchmarkScoringAsrOutputs.

BenchmarkScoringAsr Objects

class BenchmarkScoringAsr(AssetNode[BenchmarkScoringAsrInputs,
BenchmarkScoringAsrOutputs])

[view_source]

BenchmarkScoringAsr node.

Benchmark Scoring ASR is a function that evaluates and compares the performance of automatic speech recognition systems by analyzing their accuracy, speed, and other relevant metrics against a standardized set of benchmarks.

InputType: audio OutputType: label

MultiClassTextClassificationInputs Objects

class MultiClassTextClassificationInputs(Inputs)

[view_source]

Input parameters for MultiClassTextClassification.

__init__

def __init__(node=None)

[view_source]

Initialize MultiClassTextClassificationInputs.

MultiClassTextClassificationOutputs Objects

class MultiClassTextClassificationOutputs(Outputs)

[view_source]

Output parameters for MultiClassTextClassification.

__init__

def __init__(node=None)

[view_source]

Initialize MultiClassTextClassificationOutputs.

MultiClassTextClassification Objects

class MultiClassTextClassification(
AssetNode[MultiClassTextClassificationInputs,
MultiClassTextClassificationOutputs])

[view_source]

MultiClassTextClassification node.

Multi Class Text Classification is a natural language processing task that involves categorizing a given text into one of several predefined classes or categories based on its content.

InputType: text OutputType: label

SpeechEmbeddingInputs Objects

class SpeechEmbeddingInputs(Inputs)

[view_source]

Input parameters for SpeechEmbedding.

__init__

def __init__(node=None)

[view_source]

Initialize SpeechEmbeddingInputs.

SpeechEmbeddingOutputs Objects

class SpeechEmbeddingOutputs(Outputs)

[view_source]

Output parameters for SpeechEmbedding.

__init__

def __init__(node=None)

[view_source]

Initialize SpeechEmbeddingOutputs.

SpeechEmbedding Objects

class SpeechEmbedding(AssetNode[SpeechEmbeddingInputs,
SpeechEmbeddingOutputs])

[view_source]

SpeechEmbedding node.

Transforms spoken content into a fixed-size vector in a high-dimensional space that captures the content's essence. Facilitates tasks like speech recognition and speaker verification.

InputType: audio OutputType: text

DocumentImageParsingInputs Objects

class DocumentImageParsingInputs(Inputs)

[view_source]

Input parameters for DocumentImageParsing.

__init__

def __init__(node=None)

[view_source]

Initialize DocumentImageParsingInputs.

DocumentImageParsingOutputs Objects

class DocumentImageParsingOutputs(Outputs)

[view_source]

Output parameters for DocumentImageParsing.

__init__

def __init__(node=None)

[view_source]

Initialize DocumentImageParsingOutputs.

DocumentImageParsing Objects

class DocumentImageParsing(AssetNode[DocumentImageParsingInputs,
DocumentImageParsingOutputs])

[view_source]

DocumentImageParsing node.

Document Image Parsing is the process of analyzing and converting scanned or photographed images of documents into structured, machine-readable formats by identifying and extracting text, layout, and other relevant information.

InputType: image OutputType: text

TranslationInputs Objects

class TranslationInputs(Inputs)

[view_source]

Input parameters for Translation.

__init__

def __init__(node=None)

[view_source]

Initialize TranslationInputs.

TranslationOutputs Objects

class TranslationOutputs(Outputs)

[view_source]

Output parameters for Translation.

__init__

def __init__(node=None)

[view_source]

Initialize TranslationOutputs.

Translation Objects

class Translation(AssetNode[TranslationInputs, TranslationOutputs])

[view_source]

Translation node.

Converts text from one language to another while maintaining the original message's essence and context. Crucial for global communication.

InputType: text OutputType: text

AudioSourceSeparationInputs Objects

class AudioSourceSeparationInputs(Inputs)

[view_source]

Input parameters for AudioSourceSeparation.

__init__

def __init__(node=None)

[view_source]

Initialize AudioSourceSeparationInputs.

AudioSourceSeparationOutputs Objects

class AudioSourceSeparationOutputs(Outputs)

[view_source]

Output parameters for AudioSourceSeparation.

__init__

def __init__(node=None)

[view_source]

Initialize AudioSourceSeparationOutputs.

AudioSourceSeparation Objects

class AudioSourceSeparation(AssetNode[AudioSourceSeparationInputs,
AudioSourceSeparationOutputs])

[view_source]

AudioSourceSeparation node.

Audio Source Separation is the process of separating a mixture (e.g. a pop band recording) into isolated sounds from individual sources (e.g. just the lead vocals).

InputType: audio OutputType: audio

SpeechRecognitionInputs Objects

class SpeechRecognitionInputs(Inputs)

[view_source]

Input parameters for SpeechRecognition.

__init__

def __init__(node=None)

[view_source]

Initialize SpeechRecognitionInputs.

SpeechRecognitionOutputs Objects

class SpeechRecognitionOutputs(Outputs)

[view_source]

Output parameters for SpeechRecognition.

__init__

def __init__(node=None)

[view_source]

Initialize SpeechRecognitionOutputs.

SpeechRecognition Objects

class SpeechRecognition(AssetNode[SpeechRecognitionInputs,
SpeechRecognitionOutputs])

[view_source]

SpeechRecognition node.

Converts spoken language into written text. Useful for transcription services, voice assistants, and applications requiring voice-to-text capabilities.

InputType: audio OutputType: text

KeywordSpottingInputs Objects

class KeywordSpottingInputs(Inputs)

[view_source]

Input parameters for KeywordSpotting.

__init__

def __init__(node=None)

[view_source]

Initialize KeywordSpottingInputs.

KeywordSpottingOutputs Objects

class KeywordSpottingOutputs(Outputs)

[view_source]

Output parameters for KeywordSpotting.

__init__

def __init__(node=None)

[view_source]

Initialize KeywordSpottingOutputs.

KeywordSpotting Objects

class KeywordSpotting(AssetNode[KeywordSpottingInputs,
KeywordSpottingOutputs])

[view_source]

KeywordSpotting node.

Keyword Spotting is a function that enables the detection and identification of specific words or phrases within a stream of audio, often used in voice-activated systems to trigger actions or commands based on recognized keywords.

InputType: audio OutputType: label

PartOfSpeechTaggingInputs Objects

class PartOfSpeechTaggingInputs(Inputs)

[view_source]

Input parameters for PartOfSpeechTagging.

__init__

def __init__(node=None)

[view_source]

Initialize PartOfSpeechTaggingInputs.

PartOfSpeechTaggingOutputs Objects

class PartOfSpeechTaggingOutputs(Outputs)

[view_source]

Output parameters for PartOfSpeechTagging.

__init__

def __init__(node=None)

[view_source]

Initialize PartOfSpeechTaggingOutputs.

PartOfSpeechTagging Objects

class PartOfSpeechTagging(AssetNode[PartOfSpeechTaggingInputs,
PartOfSpeechTaggingOutputs])

[view_source]

PartOfSpeechTagging node.

Part of Speech Tagging is a natural language processing task that involves assigning each word in a sentence its corresponding part of speech, such as noun, verb, adjective, or adverb, based on its role and context within the sentence.

InputType: text OutputType: label

ReferencelessAudioGenerationMetricInputs Objects

class ReferencelessAudioGenerationMetricInputs(Inputs)

[view_source]

Input parameters for ReferencelessAudioGenerationMetric.

__init__

def __init__(node=None)

[view_source]

Initialize ReferencelessAudioGenerationMetricInputs.

ReferencelessAudioGenerationMetricOutputs Objects

class ReferencelessAudioGenerationMetricOutputs(Outputs)

[view_source]

Output parameters for ReferencelessAudioGenerationMetric.

__init__

def __init__(node=None)

[view_source]

Initialize ReferencelessAudioGenerationMetricOutputs.

ReferencelessAudioGenerationMetric Objects

class ReferencelessAudioGenerationMetric(
BaseMetric[ReferencelessAudioGenerationMetricInputs,
ReferencelessAudioGenerationMetricOutputs])

[view_source]

ReferencelessAudioGenerationMetric node.

The Referenceless Audio Generation Metric is a tool designed to evaluate the quality of generated audio content without the need for a reference or original audio sample for comparison.

InputType: text OutputType: text

VoiceActivityDetectionInputs Objects

class VoiceActivityDetectionInputs(Inputs)

[view_source]

Input parameters for VoiceActivityDetection.

__init__

def __init__(node=None)

[view_source]

Initialize VoiceActivityDetectionInputs.

VoiceActivityDetectionOutputs Objects

class VoiceActivityDetectionOutputs(Outputs)

[view_source]

Output parameters for VoiceActivityDetection.

__init__

def __init__(node=None)

[view_source]

Initialize VoiceActivityDetectionOutputs.

VoiceActivityDetection Objects

class VoiceActivityDetection(BaseSegmentor[VoiceActivityDetectionInputs,
VoiceActivityDetectionOutputs])

[view_source]

VoiceActivityDetection node.

Determines when a person is speaking in an audio clip. It's an essential preprocessing step for other audio-related tasks.

InputType: audio OutputType: audio

SentimentAnalysisInputs Objects

class SentimentAnalysisInputs(Inputs)

[view_source]

Input parameters for SentimentAnalysis.

__init__

def __init__(node=None)

[view_source]

Initialize SentimentAnalysisInputs.

SentimentAnalysisOutputs Objects

class SentimentAnalysisOutputs(Outputs)

[view_source]

Output parameters for SentimentAnalysis.

__init__

def __init__(node=None)

[view_source]

Initialize SentimentAnalysisOutputs.

SentimentAnalysis Objects

class SentimentAnalysis(AssetNode[SentimentAnalysisInputs,
SentimentAnalysisOutputs])

[view_source]

SentimentAnalysis node.

Determines the sentiment or emotion (e.g., positive, negative, neutral) of a piece of text, aiding in understanding user feedback or market sentiment.

InputType: text OutputType: label

SubtitlingInputs Objects

class SubtitlingInputs(Inputs)

[view_source]

Input parameters for Subtitling.

__init__

def __init__(node=None)

[view_source]

Initialize SubtitlingInputs.

SubtitlingOutputs Objects

class SubtitlingOutputs(Outputs)

[view_source]

Output parameters for Subtitling.

__init__

def __init__(node=None)

[view_source]

Initialize SubtitlingOutputs.

Subtitling Objects

class Subtitling(AssetNode[SubtitlingInputs, SubtitlingOutputs])

[view_source]

Subtitling node.

Generates accurate subtitles for videos, enhancing accessibility for diverse audiences.

InputType: audio OutputType: text

MultiLabelTextClassificationInputs Objects

class MultiLabelTextClassificationInputs(Inputs)

[view_source]

Input parameters for MultiLabelTextClassification.

__init__

def __init__(node=None)

[view_source]

Initialize MultiLabelTextClassificationInputs.

MultiLabelTextClassificationOutputs Objects

class MultiLabelTextClassificationOutputs(Outputs)

[view_source]

Output parameters for MultiLabelTextClassification.

__init__

def __init__(node=None)

[view_source]

Initialize MultiLabelTextClassificationOutputs.

MultiLabelTextClassification Objects

class MultiLabelTextClassification(
AssetNode[MultiLabelTextClassificationInputs,
MultiLabelTextClassificationOutputs])

[view_source]

MultiLabelTextClassification node.

Multi Label Text Classification is a natural language processing task where a given text is analyzed and assigned multiple relevant labels or categories from a predefined set, allowing for the text to belong to more than one category simultaneously.

InputType: text OutputType: label

VisemeGenerationInputs Objects

class VisemeGenerationInputs(Inputs)

[view_source]

Input parameters for VisemeGeneration.

__init__

def __init__(node=None)

[view_source]

Initialize VisemeGenerationInputs.

VisemeGenerationOutputs Objects

class VisemeGenerationOutputs(Outputs)

[view_source]

Output parameters for VisemeGeneration.

__init__

def __init__(node=None)

[view_source]

Initialize VisemeGenerationOutputs.

VisemeGeneration Objects

class VisemeGeneration(AssetNode[VisemeGenerationInputs,
VisemeGenerationOutputs])

[view_source]

VisemeGeneration node.

Viseme Generation is the process of creating visual representations of phonemes, which are the distinct units of sound in speech, to synchronize lip movements with spoken words in animations or virtual avatars.

InputType: text OutputType: label

TextSegmenationInputs Objects

class TextSegmenationInputs(Inputs)

[view_source]

Input parameters for TextSegmenation.

__init__

def __init__(node=None)

[view_source]

Initialize TextSegmenationInputs.

TextSegmenationOutputs Objects

class TextSegmenationOutputs(Outputs)

[view_source]

Output parameters for TextSegmenation.

__init__

def __init__(node=None)

[view_source]

Initialize TextSegmenationOutputs.

TextSegmenation Objects

class TextSegmenation(AssetNode[TextSegmenationInputs,
TextSegmenationOutputs])

[view_source]

TextSegmenation node.

Text Segmentation is the process of dividing a continuous text into meaningful units, such as words, sentences, or topics, to facilitate easier analysis and understanding.

InputType: text OutputType: text

ZeroShotClassificationInputs Objects

class ZeroShotClassificationInputs(Inputs)

[view_source]

Input parameters for ZeroShotClassification.

__init__

def __init__(node=None)

[view_source]

Initialize ZeroShotClassificationInputs.

ZeroShotClassificationOutputs Objects

class ZeroShotClassificationOutputs(Outputs)

[view_source]

Output parameters for ZeroShotClassification.

__init__

def __init__(node=None)

[view_source]

Initialize ZeroShotClassificationOutputs.

ZeroShotClassification Objects

class ZeroShotClassification(AssetNode[ZeroShotClassificationInputs,
ZeroShotClassificationOutputs])

[view_source]

ZeroShotClassification node.

Classifies text into arbitrary, user-supplied categories without task-specific training examples, relying on the model's general language understanding.

InputType: text OutputType: text

TextGenerationInputs Objects

class TextGenerationInputs(Inputs)

[view_source]

Input parameters for TextGeneration.

__init__

def __init__(node=None)

[view_source]

Initialize TextGenerationInputs.

TextGenerationOutputs Objects

class TextGenerationOutputs(Outputs)

[view_source]

Output parameters for TextGeneration.

__init__

def __init__(node=None)

[view_source]

Initialize TextGenerationOutputs.

TextGeneration Objects

class TextGeneration(AssetNode[TextGenerationInputs, TextGenerationOutputs])

[view_source]

TextGeneration node.

Creates coherent and contextually relevant textual content based on prompts or certain parameters. Useful for chatbots, content creation, and data augmentation.

InputType: text OutputType: text

AudioIntentDetectionInputs Objects

class AudioIntentDetectionInputs(Inputs)

[view_source]

Input parameters for AudioIntentDetection.

__init__

def __init__(node=None)

[view_source]

Initialize AudioIntentDetectionInputs.

AudioIntentDetectionOutputs Objects

class AudioIntentDetectionOutputs(Outputs)

[view_source]

Output parameters for AudioIntentDetection.

__init__

def __init__(node=None)

[view_source]

Initialize AudioIntentDetectionOutputs.

AudioIntentDetection Objects

class AudioIntentDetection(AssetNode[AudioIntentDetectionInputs,
AudioIntentDetectionOutputs])

[view_source]

AudioIntentDetection node.

Audio Intent Detection is a process that involves analyzing audio signals to identify and interpret the underlying intentions or purposes behind spoken words, enabling systems to understand and respond appropriately to human speech.

InputType: audio OutputType: label

EntityLinkingInputs Objects

class EntityLinkingInputs(Inputs)

[view_source]

Input parameters for EntityLinking.

__init__

def __init__(node=None)

[view_source]

Initialize EntityLinkingInputs.

EntityLinkingOutputs Objects

class EntityLinkingOutputs(Outputs)

[view_source]

Output parameters for EntityLinking.

__init__

def __init__(node=None)

[view_source]

Initialize EntityLinkingOutputs.

EntityLinking Objects

class EntityLinking(AssetNode[EntityLinkingInputs, EntityLinkingOutputs])

[view_source]

EntityLinking node.

Associates identified entities in the text with specific entries in a knowledge base or database.

InputType: text OutputType: label

ConnectionInputs Objects

class ConnectionInputs(Inputs)

[view_source]

Input parameters for Connection.

__init__

def __init__(node=None)

[view_source]

Initialize ConnectionInputs.

ConnectionOutputs Objects

class ConnectionOutputs(Outputs)

[view_source]

Output parameters for Connection.

__init__

def __init__(node=None)

[view_source]

Initialize ConnectionOutputs.

Connection Objects

class Connection(AssetNode[ConnectionInputs, ConnectionOutputs])

[view_source]

Connection node.

Connections are integrations that allow you to connect your AI agents to external tools.

InputType: text OutputType: text

VisualQuestionAnsweringInputs Objects

class VisualQuestionAnsweringInputs(Inputs)

[view_source]

Input parameters for VisualQuestionAnswering.

__init__

def __init__(node=None)

[view_source]

Initialize VisualQuestionAnsweringInputs.

VisualQuestionAnsweringOutputs Objects

class VisualQuestionAnsweringOutputs(Outputs)

[view_source]

Output parameters for VisualQuestionAnswering.

__init__

def __init__(node=None)

[view_source]

Initialize VisualQuestionAnsweringOutputs.

VisualQuestionAnswering Objects

class VisualQuestionAnswering(AssetNode[VisualQuestionAnsweringInputs,
VisualQuestionAnsweringOutputs])

[view_source]

VisualQuestionAnswering node.

Visual Question Answering (VQA) is a task in artificial intelligence that involves analyzing an image and providing accurate, contextually relevant answers to questions posed about the visual content of that image.

InputType: image OutputType: video

LoglikelihoodInputs Objects

class LoglikelihoodInputs(Inputs)

[view_source]

Input parameters for Loglikelihood.

__init__

def __init__(node=None)

[view_source]

Initialize LoglikelihoodInputs.

LoglikelihoodOutputs Objects

class LoglikelihoodOutputs(Outputs)

[view_source]

Output parameters for Loglikelihood.

__init__

def __init__(node=None)

[view_source]

Initialize LoglikelihoodOutputs.

Loglikelihood Objects

class Loglikelihood(AssetNode[LoglikelihoodInputs, LoglikelihoodOutputs])

[view_source]

Loglikelihood node.

The Log Likelihood function measures the probability of observing the given data under a specific statistical model by taking the natural logarithm of the likelihood function, thereby transforming the product of probabilities into a sum, which simplifies the process of optimization and parameter estimation.

InputType: text OutputType: number
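The description above notes that taking the logarithm turns a product of per-observation probabilities into a sum. A toy worked example with a biased coin (an illustration of the general concept, not the aixplain node's API) makes the identity concrete:

```python
import math

def likelihood(data, p):
    """Product of per-observation probabilities under P(heads) = p."""
    prod = 1.0
    for x in data:  # x = 1 for heads, 0 for tails
        prod *= p if x == 1 else (1 - p)
    return prod

def log_likelihood(data, p):
    """Sum of per-observation log-probabilities: log of the product above."""
    return sum(math.log(p if x == 1 else (1 - p)) for x in data)

data = [1, 1, 0, 1, 0]  # 3 heads, 2 tails
p = 0.6
ll = log_likelihood(data, p)

# Parameter estimation via the sum form: the maximizer over candidate
# values of p is the empirical frequency of heads, 3/5 = 0.6.
best_p = max([0.2, 0.4, 0.6, 0.8], key=lambda q: log_likelihood(data, q))
```

Summing logs also avoids the numerical underflow that multiplying many small probabilities would cause on long datasets.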

LanguageIdentificationAudioInputs Objects

class LanguageIdentificationAudioInputs(Inputs)

[view_source]

Input parameters for LanguageIdentificationAudio.

__init__

def __init__(node=None)

[view_source]

Initialize LanguageIdentificationAudioInputs.

LanguageIdentificationAudioOutputs Objects

class LanguageIdentificationAudioOutputs(Outputs)

[view_source]

Output parameters for LanguageIdentificationAudio.

__init__

def __init__(node=None)

[view_source]

Initialize LanguageIdentificationAudioOutputs.

LanguageIdentificationAudio Objects

class LanguageIdentificationAudio(AssetNode[LanguageIdentificationAudioInputs,
LanguageIdentificationAudioOutputs])

[view_source]

LanguageIdentificationAudio node.

The Language Identification Audio function analyzes audio input to determine and identify the language being spoken.

InputType: audio OutputType: label

FactCheckingInputs Objects

class FactCheckingInputs(Inputs)

[view_source]

Input parameters for FactChecking.

__init__

def __init__(node=None)

[view_source]

Initialize FactCheckingInputs.

FactCheckingOutputs Objects

class FactCheckingOutputs(Outputs)

[view_source]

Output parameters for FactChecking.

__init__

def __init__(node=None)

[view_source]

Initialize FactCheckingOutputs.

FactChecking Objects

class FactChecking(AssetNode[FactCheckingInputs, FactCheckingOutputs])

[view_source]

FactChecking node.

Fact Checking is the process of verifying the accuracy and truthfulness of information, statements, or claims by cross-referencing with reliable sources and evidence.

InputType: text OutputType: label

TableQuestionAnsweringInputs Objects

class TableQuestionAnsweringInputs(Inputs)

[view_source]

Input parameters for TableQuestionAnswering.

__init__

def __init__(node=None)

[view_source]

Initialize TableQuestionAnsweringInputs.

TableQuestionAnsweringOutputs Objects

class TableQuestionAnsweringOutputs(Outputs)

[view_source]

Output parameters for TableQuestionAnswering.

__init__

def __init__(node=None)

[view_source]

Initialize TableQuestionAnsweringOutputs.

TableQuestionAnswering Objects

class TableQuestionAnswering(AssetNode[TableQuestionAnsweringInputs,
TableQuestionAnsweringOutputs])

[view_source]

TableQuestionAnswering node.

In table question answering, given an input table (or a set of tables) T and a natural language question Q (a user query), the task is to output the correct answer A.

InputType: text OutputType: text
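The (T, Q) → A setup above can be sketched with a toy example. Real table-QA models parse the question with NLP; this illustrative stand-in (not the aixplain node's API) just pattern-matches a "What is the X of Y?" question against a small table:

```python
import re

# T: a table as a list of rows (dicts mapping column name to cell value).
T = [
    {"country": "France", "capital": "Paris"},
    {"country": "Japan", "capital": "Tokyo"},
]

def answer(question, table):
    """Return answer A for question Q over table T, or None if unmatched."""
    m = re.match(r"What is the (\w+) of (\w+)\?", question)
    if not m:
        return None
    column, entity = m.group(1), m.group(2)
    # Find the row mentioning the entity and read off the requested column.
    for row in table:
        if entity in row.values():
            return row.get(column)
    return None

a = answer("What is the capital of Japan?", T)
```

This captures the interface (table plus natural-language query in, answer out) while eliding the hard part, which is robustly grounding arbitrary questions in table structure.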

SpeechClassificationInputs Objects

class SpeechClassificationInputs(Inputs)

[view_source]

Input parameters for SpeechClassification.

__init__

def __init__(node=None)

[view_source]

Initialize SpeechClassificationInputs.

SpeechClassificationOutputs Objects

class SpeechClassificationOutputs(Outputs)

[view_source]

Output parameters for SpeechClassification.

__init__

def __init__(node=None)

[view_source]

Initialize SpeechClassificationOutputs.

SpeechClassification Objects

class SpeechClassification(AssetNode[SpeechClassificationInputs,
SpeechClassificationOutputs])

[view_source]

SpeechClassification node.

Categorizes audio clips based on their content, aiding in content organization and targeted actions.

InputType: audio OutputType: label

InverseTextNormalizationInputs Objects

class InverseTextNormalizationInputs(Inputs)

[view_source]

Input parameters for InverseTextNormalization.

__init__

def __init__(node=None)

[view_source]

Initialize InverseTextNormalizationInputs.

InverseTextNormalizationOutputs Objects

class InverseTextNormalizationOutputs(Outputs)

[view_source]

Output parameters for InverseTextNormalization.

__init__

def __init__(node=None)

[view_source]

Initialize InverseTextNormalizationOutputs.

InverseTextNormalization Objects

class InverseTextNormalization(AssetNode[InverseTextNormalizationInputs,
InverseTextNormalizationOutputs])

[view_source]

InverseTextNormalization node.

Inverse Text Normalization converts text from its spoken, verbalized form (e.g., "twenty three dollars") into its standard written form ("$23"), restoring numerals, dates, and abbreviations.

InputType: text OutputType: label

MultiClassImageClassificationInputs Objects

class MultiClassImageClassificationInputs(Inputs)

[view_source]

Input parameters for MultiClassImageClassification.

__init__

def __init__(node=None)

[view_source]

Initialize MultiClassImageClassificationInputs.

MultiClassImageClassificationOutputs Objects

class MultiClassImageClassificationOutputs(Outputs)

[view_source]

Output parameters for MultiClassImageClassification.

__init__

def __init__(node=None)

[view_source]

Initialize MultiClassImageClassificationOutputs.

MultiClassImageClassification Objects

class MultiClassImageClassification(
AssetNode[MultiClassImageClassificationInputs,
MultiClassImageClassificationOutputs])

[view_source]

MultiClassImageClassification node.

Multi Class Image Classification is a machine learning task where an algorithm is trained to categorize images into one of several predefined classes or categories based on their visual content.

InputType: image OutputType: label

AsrGenderClassificationInputs Objects

class AsrGenderClassificationInputs(Inputs)

[view_source]

Input parameters for AsrGenderClassification.

__init__

def __init__(node=None)

[view_source]

Initialize AsrGenderClassificationInputs.

AsrGenderClassificationOutputs Objects

class AsrGenderClassificationOutputs(Outputs)

[view_source]

Output parameters for AsrGenderClassification.

__init__

def __init__(node=None)

[view_source]

Initialize AsrGenderClassificationOutputs.

AsrGenderClassification Objects

class AsrGenderClassification(AssetNode[AsrGenderClassificationInputs,
AsrGenderClassificationOutputs])

[view_source]

AsrGenderClassification node.

The ASR Gender Classification function analyzes audio recordings to determine and classify the speaker's gender based on their voice characteristics.

InputType: audio OutputType: label

SummarizationInputs Objects

class SummarizationInputs(Inputs)

[view_source]

Input parameters for Summarization.

__init__

def __init__(node=None)

[view_source]

Initialize SummarizationInputs.

SummarizationOutputs Objects

class SummarizationOutputs(Outputs)

[view_source]

Output parameters for Summarization.

__init__

def __init__(node=None)

[view_source]

Initialize SummarizationOutputs.

Summarization Objects

class Summarization(AssetNode[SummarizationInputs, SummarizationOutputs])

[view_source]

Summarization node.

Text summarization is the process of distilling the most important information from a source (or sources) to produce an abridged version for a particular user (or users) and task (or tasks).

InputType: text OutputType: text

TopicModelingInputs Objects

class TopicModelingInputs(Inputs)

[view_source]

Input parameters for TopicModeling.

__init__

def __init__(node=None)

[view_source]

Initialize TopicModelingInputs.

TopicModelingOutputs Objects

class TopicModelingOutputs(Outputs)

[view_source]

Output parameters for TopicModeling.

__init__

def __init__(node=None)

[view_source]

Initialize TopicModelingOutputs.

TopicModeling Objects

class TopicModeling(AssetNode[TopicModelingInputs, TopicModelingOutputs])

[view_source]

TopicModeling node.

Topic modeling is a type of statistical modeling for discovering the abstract “topics” that occur in a collection of documents.

InputType: text OutputType: label

AudioReconstructionInputs Objects

class AudioReconstructionInputs(Inputs)

[view_source]

Input parameters for AudioReconstruction.

__init__

def __init__(node=None)

[view_source]

Initialize AudioReconstructionInputs.

AudioReconstructionOutputs Objects

class AudioReconstructionOutputs(Outputs)

[view_source]

Output parameters for AudioReconstruction.

__init__

def __init__(node=None)

[view_source]

Initialize AudioReconstructionOutputs.

AudioReconstruction Objects

class AudioReconstruction(BaseReconstructor[AudioReconstructionInputs,
AudioReconstructionOutputs])

[view_source]

AudioReconstruction node.

Audio Reconstruction is the process of restoring or recreating audio signals from incomplete, damaged, or degraded recordings to achieve a high-quality, accurate representation of the original sound.

InputType: audio OutputType: audio

TextEmbeddingInputs Objects

class TextEmbeddingInputs(Inputs)

[view_source]

Input parameters for TextEmbedding.

__init__

def __init__(node=None)

[view_source]

Initialize TextEmbeddingInputs.

TextEmbeddingOutputs Objects

class TextEmbeddingOutputs(Outputs)

[view_source]

Output parameters for TextEmbedding.

__init__

def __init__(node=None)

[view_source]

Initialize TextEmbeddingOutputs.

TextEmbedding Objects

class TextEmbedding(AssetNode[TextEmbeddingInputs, TextEmbeddingOutputs])

[view_source]

TextEmbedding node.

Text embedding is a process that converts text into numerical vectors, capturing the semantic meaning and contextual relationships of words or phrases, enabling machines to understand and analyze natural language more effectively.

InputType: text OutputType: text

DetectLanguageFromTextInputs Objects

class DetectLanguageFromTextInputs(Inputs)

[view_source]

Input parameters for DetectLanguageFromText.

__init__

def __init__(node=None)

[view_source]

Initialize DetectLanguageFromTextInputs.

DetectLanguageFromTextOutputs Objects

class DetectLanguageFromTextOutputs(Outputs)

[view_source]

Output parameters for DetectLanguageFromText.

__init__

def __init__(node=None)

[view_source]

Initialize DetectLanguageFromTextOutputs.

DetectLanguageFromText Objects

class DetectLanguageFromText(AssetNode[DetectLanguageFromTextInputs,
DetectLanguageFromTextOutputs])

[view_source]

DetectLanguageFromText node.

Detects the language of a given text input.

InputType: text OutputType: label

ExtractAudioFromVideoInputs Objects

class ExtractAudioFromVideoInputs(Inputs)

[view_source]

Input parameters for ExtractAudioFromVideo.

__init__

def __init__(node=None)

[view_source]

Initialize ExtractAudioFromVideoInputs.

ExtractAudioFromVideoOutputs Objects

class ExtractAudioFromVideoOutputs(Outputs)

[view_source]

Output parameters for ExtractAudioFromVideo.

__init__

def __init__(node=None)

[view_source]

Initialize ExtractAudioFromVideoOutputs.

ExtractAudioFromVideo Objects

class ExtractAudioFromVideo(AssetNode[ExtractAudioFromVideoInputs,
ExtractAudioFromVideoOutputs])

[view_source]

ExtractAudioFromVideo node.

Isolates and extracts audio tracks from video files, aiding in audio analysis or transcription tasks.

InputType: video OutputType: audio

SceneDetectionInputs Objects

class SceneDetectionInputs(Inputs)

[view_source]

Input parameters for SceneDetection.

__init__

def __init__(node=None)

[view_source]

Initialize SceneDetectionInputs.

SceneDetectionOutputs Objects

class SceneDetectionOutputs(Outputs)

[view_source]

Output parameters for SceneDetection.

__init__

def __init__(node=None)

[view_source]

Initialize SceneDetectionOutputs.

SceneDetection Objects

class SceneDetection(AssetNode[SceneDetectionInputs, SceneDetectionOutputs])

[view_source]

SceneDetection node.

Scene detection is used for detecting transitions between shots in a video to split it into basic temporal segments.

InputType: image OutputType: text

TextToImageGenerationInputs Objects

class TextToImageGenerationInputs(Inputs)

[view_source]

Input parameters for TextToImageGeneration.

__init__

def __init__(node=None)

[view_source]

Initialize TextToImageGenerationInputs.

TextToImageGenerationOutputs Objects

class TextToImageGenerationOutputs(Outputs)

[view_source]

Output parameters for TextToImageGeneration.

__init__

def __init__(node=None)

[view_source]

Initialize TextToImageGenerationOutputs.

TextToImageGeneration Objects

class TextToImageGeneration(AssetNode[TextToImageGenerationInputs,
TextToImageGenerationOutputs])

[view_source]

TextToImageGeneration node.

Creates a visual representation based on textual input, turning descriptions into pictorial forms. Used in creative processes and content generation.

InputType: text OutputType: image

AutoMaskGenerationInputs Objects

class AutoMaskGenerationInputs(Inputs)

[view_source]

Input parameters for AutoMaskGeneration.

__init__

def __init__(node=None)

[view_source]

Initialize AutoMaskGenerationInputs.

AutoMaskGenerationOutputs Objects

class AutoMaskGenerationOutputs(Outputs)

[view_source]

Output parameters for AutoMaskGeneration.

__init__

def __init__(node=None)

[view_source]

Initialize AutoMaskGenerationOutputs.

AutoMaskGeneration Objects

class AutoMaskGeneration(AssetNode[AutoMaskGenerationInputs,
AutoMaskGenerationOutputs])

[view_source]

AutoMaskGeneration node.

Auto-mask generation refers to the automated process of creating masks in image processing or computer vision, typically for segmentation tasks. A mask is a binary or multi-class image that labels different parts of an image, usually separating the foreground (objects of interest) from the background, or identifying specific object classes in an image.

InputType: image OutputType: label

AudioLanguageIdentificationInputs Objects

class AudioLanguageIdentificationInputs(Inputs)

[view_source]

Input parameters for AudioLanguageIdentification.

__init__

def __init__(node=None)

[view_source]

Initialize AudioLanguageIdentificationInputs.

AudioLanguageIdentificationOutputs Objects

class AudioLanguageIdentificationOutputs(Outputs)

[view_source]

Output parameters for AudioLanguageIdentification.

__init__

def __init__(node=None)

[view_source]

Initialize AudioLanguageIdentificationOutputs.

AudioLanguageIdentification Objects

class AudioLanguageIdentification(AssetNode[AudioLanguageIdentificationInputs,
AudioLanguageIdentificationOutputs]
)

[view_source]

AudioLanguageIdentification node.

Audio Language Identification is a process that involves analyzing an audio recording to determine the language being spoken.

InputType: audio OutputType: label

FacialRecognitionInputs Objects

class FacialRecognitionInputs(Inputs)

[view_source]

Input parameters for FacialRecognition.

__init__

def __init__(node=None)

[view_source]

Initialize FacialRecognitionInputs.

FacialRecognitionOutputs Objects

class FacialRecognitionOutputs(Outputs)

[view_source]

Output parameters for FacialRecognition.

__init__

def __init__(node=None)

[view_source]

Initialize FacialRecognitionOutputs.

FacialRecognition Objects

class FacialRecognition(AssetNode[FacialRecognitionInputs,
FacialRecognitionOutputs])

[view_source]

FacialRecognition node.

A facial recognition system is a technology capable of matching a human face from a digital image or a video frame against a database of faces.

InputType: image OutputType: label

QuestionAnsweringInputs Objects

class QuestionAnsweringInputs(Inputs)

[view_source]

Input parameters for QuestionAnswering.

__init__

def __init__(node=None)

[view_source]

Initialize QuestionAnsweringInputs.

QuestionAnsweringOutputs Objects

class QuestionAnsweringOutputs(Outputs)

[view_source]

Output parameters for QuestionAnswering.

__init__

def __init__(node=None)

[view_source]

Initialize QuestionAnsweringOutputs.

QuestionAnswering Objects

class QuestionAnswering(AssetNode[QuestionAnsweringInputs,
QuestionAnsweringOutputs])

[view_source]

QuestionAnswering node.

Builds systems that automatically answer questions posed by humans in natural language, usually based on a given text.

InputType: text OutputType: text

ImageImpaintingInputs Objects

class ImageImpaintingInputs(Inputs)

[view_source]

Input parameters for ImageImpainting.

__init__

def __init__(node=None)

[view_source]

Initialize ImageImpaintingInputs.

ImageImpaintingOutputs Objects

class ImageImpaintingOutputs(Outputs)

[view_source]

Output parameters for ImageImpainting.

__init__

def __init__(node=None)

[view_source]

Initialize ImageImpaintingOutputs.

ImageImpainting Objects

class ImageImpainting(AssetNode[ImageImpaintingInputs,
ImageImpaintingOutputs])

[view_source]

ImageImpainting node.

Image inpainting is a process that involves filling in missing or damaged parts of an image in a way that is visually coherent and seamlessly blends with the surrounding areas, often using advanced algorithms and techniques to restore the image to its original or intended appearance.

InputType: image OutputType: image
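
The simplest form of the idea can be shown on a tiny grid: fill each missing pixel with the average of its known neighbors. This is a toy sketch of the concept only; real inpainting models reconstruct large regions with learned priors.

```python
def inpaint(grid, missing=None):
    # Replace each missing cell with the average of its known 4-neighbors.
    h, w = len(grid), len(grid[0])
    result = [row[:] for row in grid]
    for y in range(h):
        for x in range(w):
            if grid[y][x] is missing:
                neighbors = [grid[ny][nx]
                             for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                             if 0 <= ny < h and 0 <= nx < w
                             and grid[ny][nx] is not missing]
                result[y][x] = sum(neighbors) / len(neighbors)
    return result

image = [[10, 10, 10],
         [10, None, 10],
         [10, 10, 10]]
print(inpaint(image))  # the missing center is filled with 10.0
```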

TextReconstructionInputs Objects

class TextReconstructionInputs(Inputs)

[view_source]

Input parameters for TextReconstruction.

__init__

def __init__(node=None)

[view_source]

Initialize TextReconstructionInputs.

TextReconstructionOutputs Objects

class TextReconstructionOutputs(Outputs)

[view_source]

Output parameters for TextReconstruction.

__init__

def __init__(node=None)

[view_source]

Initialize TextReconstructionOutputs.

TextReconstruction Objects

class TextReconstruction(BaseReconstructor[TextReconstructionInputs,
TextReconstructionOutputs])

[view_source]

TextReconstruction node.

Text Reconstruction is a process that involves piecing together fragmented or incomplete text data to restore it to its original, coherent form.

InputType: text OutputType: text

ScriptExecutionInputs Objects

class ScriptExecutionInputs(Inputs)

[view_source]

Input parameters for ScriptExecution.

__init__

def __init__(node=None)

[view_source]

Initialize ScriptExecutionInputs.

ScriptExecutionOutputs Objects

class ScriptExecutionOutputs(Outputs)

[view_source]

Output parameters for ScriptExecution.

__init__

def __init__(node=None)

[view_source]

Initialize ScriptExecutionOutputs.

ScriptExecution Objects

class ScriptExecution(AssetNode[ScriptExecutionInputs,
ScriptExecutionOutputs])

[view_source]

ScriptExecution node.

Script Execution refers to the process of running a set of programmed instructions or code within a computing environment, enabling the automated performance of tasks, calculations, or operations as defined by the script.

InputType: text OutputType: text

SemanticSegmentationInputs Objects

class SemanticSegmentationInputs(Inputs)

[view_source]

Input parameters for SemanticSegmentation.

__init__

def __init__(node=None)

[view_source]

Initialize SemanticSegmentationInputs.

SemanticSegmentationOutputs Objects

class SemanticSegmentationOutputs(Outputs)

[view_source]

Output parameters for SemanticSegmentation.

__init__

def __init__(node=None)

[view_source]

Initialize SemanticSegmentationOutputs.

SemanticSegmentation Objects

class SemanticSegmentation(AssetNode[SemanticSegmentationInputs,
SemanticSegmentationOutputs])

[view_source]

SemanticSegmentation node.

Semantic segmentation is a computer vision process that involves classifying each pixel in an image into a predefined category, effectively partitioning the image into meaningful segments based on the objects or regions they represent.

InputType: image OutputType: label

AudioEmotionDetectionInputs Objects

class AudioEmotionDetectionInputs(Inputs)

[view_source]

Input parameters for AudioEmotionDetection.

__init__

def __init__(node=None)

[view_source]

Initialize AudioEmotionDetectionInputs.

AudioEmotionDetectionOutputs Objects

class AudioEmotionDetectionOutputs(Outputs)

[view_source]

Output parameters for AudioEmotionDetection.

__init__

def __init__(node=None)

[view_source]

Initialize AudioEmotionDetectionOutputs.

AudioEmotionDetection Objects

class AudioEmotionDetection(AssetNode[AudioEmotionDetectionInputs,
AudioEmotionDetectionOutputs])

[view_source]

AudioEmotionDetection node.

Audio Emotion Detection is a technology that analyzes vocal characteristics and patterns in audio recordings to identify and classify the emotional state of the speaker.

InputType: audio OutputType: label

ImageCaptioningInputs Objects

class ImageCaptioningInputs(Inputs)

[view_source]

Input parameters for ImageCaptioning.

__init__

def __init__(node=None)

[view_source]

Initialize ImageCaptioningInputs.

ImageCaptioningOutputs Objects

class ImageCaptioningOutputs(Outputs)

[view_source]

Output parameters for ImageCaptioning.

__init__

def __init__(node=None)

[view_source]

Initialize ImageCaptioningOutputs.

ImageCaptioning Objects

class ImageCaptioning(AssetNode[ImageCaptioningInputs,
ImageCaptioningOutputs])

[view_source]

ImageCaptioning node.

Image Captioning is a process that involves generating a textual description of an image, typically using machine learning models to analyze the visual content and produce coherent and contextually relevant sentences that describe the objects, actions, and scenes depicted in the image.

InputType: image OutputType: text

SplitOnLinebreakInputs Objects

class SplitOnLinebreakInputs(Inputs)

[view_source]

Input parameters for SplitOnLinebreak.

__init__

def __init__(node=None)

[view_source]

Initialize SplitOnLinebreakInputs.

SplitOnLinebreakOutputs Objects

class SplitOnLinebreakOutputs(Outputs)

[view_source]

Output parameters for SplitOnLinebreak.

__init__

def __init__(node=None)

[view_source]

Initialize SplitOnLinebreakOutputs.

SplitOnLinebreak Objects

class SplitOnLinebreak(BaseSegmentor[SplitOnLinebreakInputs,
SplitOnLinebreakOutputs])

[view_source]

SplitOnLinebreak node.

The "Split On Linebreak" function divides a given string into a list of substrings, using linebreaks (newline characters) as the points of separation.

InputType: text OutputType: text
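
In plain Python the same behavior is what `str.splitlines()` gives you; the node applies it within a pipeline. A minimal sketch:

```python
def split_on_linebreak(text):
    # splitlines() handles \n, \r\n and \r uniformly.
    return text.splitlines()

segments = split_on_linebreak("first line\nsecond line\r\nthird line")
print(segments)  # ['first line', 'second line', 'third line']
```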

StyleTransferInputs Objects

class StyleTransferInputs(Inputs)

[view_source]

Input parameters for StyleTransfer.

__init__

def __init__(node=None)

[view_source]

Initialize StyleTransferInputs.

StyleTransferOutputs Objects

class StyleTransferOutputs(Outputs)

[view_source]

Output parameters for StyleTransfer.

__init__

def __init__(node=None)

[view_source]

Initialize StyleTransferOutputs.

StyleTransfer Objects

class StyleTransfer(AssetNode[StyleTransferInputs, StyleTransferOutputs])

[view_source]

StyleTransfer node.

Style Transfer is a technique in artificial intelligence that applies the visual style of one image (such as the brushstrokes of a famous painting) to the content of another image, effectively blending the artistic elements of the first image with the subject matter of the second.

InputType: image OutputType: image

BaseModelInputs Objects

class BaseModelInputs(Inputs)

[view_source]

Input parameters for BaseModel.

__init__

def __init__(node=None)

[view_source]

Initialize BaseModelInputs.

BaseModelOutputs Objects

class BaseModelOutputs(Outputs)

[view_source]

Output parameters for BaseModel.

__init__

def __init__(node=None)

[view_source]

Initialize BaseModelOutputs.

BaseModel Objects

class BaseModel(AssetNode[BaseModelInputs, BaseModelOutputs])

[view_source]

BaseModel node.

The Base-Model function serves as a foundational framework designed to provide essential features and capabilities upon which more specialized or advanced models can be built and customized.

InputType: text OutputType: text

ImageManipulationInputs Objects

class ImageManipulationInputs(Inputs)

[view_source]

Input parameters for ImageManipulation.

__init__

def __init__(node=None)

[view_source]

Initialize ImageManipulationInputs.

ImageManipulationOutputs Objects

class ImageManipulationOutputs(Outputs)

[view_source]

Output parameters for ImageManipulation.

__init__

def __init__(node=None)

[view_source]

Initialize ImageManipulationOutputs.

ImageManipulation Objects

class ImageManipulation(AssetNode[ImageManipulationInputs,
ImageManipulationOutputs])

[view_source]

ImageManipulation node.

Image Manipulation refers to the process of altering or enhancing digital images using various techniques and tools to achieve desired visual effects, correct imperfections, or transform the image's appearance.

InputType: image OutputType: image

VideoEmbeddingInputs Objects

class VideoEmbeddingInputs(Inputs)

[view_source]

Input parameters for VideoEmbedding.

__init__

def __init__(node=None)

[view_source]

Initialize VideoEmbeddingInputs.

VideoEmbeddingOutputs Objects

class VideoEmbeddingOutputs(Outputs)

[view_source]

Output parameters for VideoEmbedding.

__init__

def __init__(node=None)

[view_source]

Initialize VideoEmbeddingOutputs.

VideoEmbedding Objects

class VideoEmbedding(AssetNode[VideoEmbeddingInputs, VideoEmbeddingOutputs])

[view_source]

VideoEmbedding node.

Video Embedding is a process that transforms video content into a fixed-dimensional vector representation, capturing essential features and patterns to facilitate tasks such as retrieval, classification, and recommendation.

InputType: video OutputType: embedding

DialectDetectionInputs Objects

class DialectDetectionInputs(Inputs)

[view_source]

Input parameters for DialectDetection.

__init__

def __init__(node=None)

[view_source]

Initialize DialectDetectionInputs.

DialectDetectionOutputs Objects

class DialectDetectionOutputs(Outputs)

[view_source]

Output parameters for DialectDetection.

__init__

def __init__(node=None)

[view_source]

Initialize DialectDetectionOutputs.

DialectDetection Objects

class DialectDetection(AssetNode[DialectDetectionInputs,
DialectDetectionOutputs])

[view_source]

DialectDetection node.

Identifies specific dialects within a language, aiding in localized content creation or user experience personalization.

InputType: audio OutputType: text

FillTextMaskInputs Objects

class FillTextMaskInputs(Inputs)

[view_source]

Input parameters for FillTextMask.

__init__

def __init__(node=None)

[view_source]

Initialize FillTextMaskInputs.

FillTextMaskOutputs Objects

class FillTextMaskOutputs(Outputs)

[view_source]

Output parameters for FillTextMask.

__init__

def __init__(node=None)

[view_source]

Initialize FillTextMaskOutputs.

FillTextMask Objects

class FillTextMask(AssetNode[FillTextMaskInputs, FillTextMaskOutputs])

[view_source]

FillTextMask node.

Completes missing parts of a text based on the context, ideal for content generation or data augmentation tasks.

InputType: text OutputType: text

ActivityDetectionInputs Objects

class ActivityDetectionInputs(Inputs)

[view_source]

Input parameters for ActivityDetection.

__init__

def __init__(node=None)

[view_source]

Initialize ActivityDetectionInputs.

ActivityDetectionOutputs Objects

class ActivityDetectionOutputs(Outputs)

[view_source]

Output parameters for ActivityDetection.

__init__

def __init__(node=None)

[view_source]

Initialize ActivityDetectionOutputs.

ActivityDetection Objects

class ActivityDetection(AssetNode[ActivityDetectionInputs,
ActivityDetectionOutputs])

[view_source]

ActivityDetection node.

Detects the presence or absence of human speech; used in speech processing.

InputType: audio OutputType: label

SelectSupplierForTranslationInputs Objects

class SelectSupplierForTranslationInputs(Inputs)

[view_source]

Input parameters for SelectSupplierForTranslation.

__init__

def __init__(node=None)

[view_source]

Initialize SelectSupplierForTranslationInputs.

SelectSupplierForTranslationOutputs Objects

class SelectSupplierForTranslationOutputs(Outputs)

[view_source]

Output parameters for SelectSupplierForTranslation.

__init__

def __init__(node=None)

[view_source]

Initialize SelectSupplierForTranslationOutputs.

SelectSupplierForTranslation Objects

class SelectSupplierForTranslation(
AssetNode[SelectSupplierForTranslationInputs,
SelectSupplierForTranslationOutputs])

[view_source]

SelectSupplierForTranslation node.

Selects a supplier for translation.

InputType: text OutputType: label

ExpressionDetectionInputs Objects

class ExpressionDetectionInputs(Inputs)

[view_source]

Input parameters for ExpressionDetection.

__init__

def __init__(node=None)

[view_source]

Initialize ExpressionDetectionInputs.

ExpressionDetectionOutputs Objects

class ExpressionDetectionOutputs(Outputs)

[view_source]

Output parameters for ExpressionDetection.

__init__

def __init__(node=None)

[view_source]

Initialize ExpressionDetectionOutputs.

ExpressionDetection Objects

class ExpressionDetection(AssetNode[ExpressionDetectionInputs,
ExpressionDetectionOutputs])

[view_source]

ExpressionDetection node.

Expression Detection is the process of identifying and analyzing facial expressions to interpret emotions or intentions using AI and computer vision techniques.

InputType: text OutputType: label

VideoGenerationInputs Objects

class VideoGenerationInputs(Inputs)

[view_source]

Input parameters for VideoGeneration.

__init__

def __init__(node=None)

[view_source]

Initialize VideoGenerationInputs.

VideoGenerationOutputs Objects

class VideoGenerationOutputs(Outputs)

[view_source]

Output parameters for VideoGeneration.

__init__

def __init__(node=None)

[view_source]

Initialize VideoGenerationOutputs.

VideoGeneration Objects

class VideoGeneration(AssetNode[VideoGenerationInputs,
VideoGenerationOutputs])

[view_source]

VideoGeneration node.

Produces video content based on specific inputs or datasets. Can be used for simulations, animations, or even deepfake detection.

InputType: text OutputType: video

ImageAnalysisInputs Objects

class ImageAnalysisInputs(Inputs)

[view_source]

Input parameters for ImageAnalysis.

__init__

def __init__(node=None)

[view_source]

Initialize ImageAnalysisInputs.

ImageAnalysisOutputs Objects

class ImageAnalysisOutputs(Outputs)

[view_source]

Output parameters for ImageAnalysis.

__init__

def __init__(node=None)

[view_source]

Initialize ImageAnalysisOutputs.

ImageAnalysis Objects

class ImageAnalysis(AssetNode[ImageAnalysisInputs, ImageAnalysisOutputs])

[view_source]

ImageAnalysis node.

Image analysis is the extraction of meaningful information from images.

InputType: image OutputType: label

NoiseRemovalInputs Objects

class NoiseRemovalInputs(Inputs)

[view_source]

Input parameters for NoiseRemoval.

__init__

def __init__(node=None)

[view_source]

Initialize NoiseRemovalInputs.

NoiseRemovalOutputs Objects

class NoiseRemovalOutputs(Outputs)

[view_source]

Output parameters for NoiseRemoval.

__init__

def __init__(node=None)

[view_source]

Initialize NoiseRemovalOutputs.

NoiseRemoval Objects

class NoiseRemoval(AssetNode[NoiseRemovalInputs, NoiseRemovalOutputs])

[view_source]

NoiseRemoval node.

Noise Removal is a process that involves identifying and eliminating unwanted random variations or disturbances from an audio signal to enhance the clarity and quality of the underlying information.

InputType: audio OutputType: audio
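
The basic intuition behind denoising can be illustrated with a moving-average filter, one of the simplest noise-reduction techniques: averaging each sample with its neighbors suppresses random spikes. This is a conceptual sketch only; the actual node uses far more sophisticated models.

```python
def moving_average(samples, window=3):
    # Average each sample with its neighbors to smooth out random noise.
    half = window // 2
    smoothed = []
    for i in range(len(samples)):
        lo = max(0, i - half)
        hi = min(len(samples), i + half + 1)
        smoothed.append(sum(samples[lo:hi]) / (hi - lo))
    return smoothed

noisy = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]  # rapidly alternating "noise"
print(moving_average(noisy))  # values pulled toward the mean
```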

ImageAndVideoAnalysisInputs Objects

class ImageAndVideoAnalysisInputs(Inputs)

[view_source]

Input parameters for ImageAndVideoAnalysis.

__init__

def __init__(node=None)

[view_source]

Initialize ImageAndVideoAnalysisInputs.

ImageAndVideoAnalysisOutputs Objects

class ImageAndVideoAnalysisOutputs(Outputs)

[view_source]

Output parameters for ImageAndVideoAnalysis.

__init__

def __init__(node=None)

[view_source]

Initialize ImageAndVideoAnalysisOutputs.

ImageAndVideoAnalysis Objects

class ImageAndVideoAnalysis(AssetNode[ImageAndVideoAnalysisInputs,
ImageAndVideoAnalysisOutputs])

[view_source]

ImageAndVideoAnalysis node.

Analyzes image and video content to extract meaningful information from visual media.

InputType: image OutputType: text

KeywordExtractionInputs Objects

class KeywordExtractionInputs(Inputs)

[view_source]

Input parameters for KeywordExtraction.

__init__

def __init__(node=None)

[view_source]

Initialize KeywordExtractionInputs.

KeywordExtractionOutputs Objects

class KeywordExtractionOutputs(Outputs)

[view_source]

Output parameters for KeywordExtraction.

__init__

def __init__(node=None)

[view_source]

Initialize KeywordExtractionOutputs.

KeywordExtraction Objects

class KeywordExtraction(AssetNode[KeywordExtractionInputs,
KeywordExtractionOutputs])

[view_source]

KeywordExtraction node.

Condenses text by extracting its most relevant keywords. Example use cases include finding topics of interest in a news article and identifying problems raised in customer reviews.

InputType: text OutputType: label
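
A naive frequency-based version of the idea: drop common stopwords and keep the most frequent remaining terms. The stopword list here is an invented minimal set for illustration; real keyword extraction models use linguistic and statistical signals well beyond raw counts.

```python
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "are", "of", "and", "to", "in", "but"}

def extract_keywords(text, top_n=3):
    # Count non-stopword tokens and return the most frequent ones.
    words = [w.strip(".,!?").lower() for w in text.split()]
    counts = Counter(w for w in words if w and w not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]

review = "The battery is great but the battery life drops fast in cold weather."
print(extract_keywords(review))  # 'battery' ranks first
```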

SplitOnSilenceInputs Objects

class SplitOnSilenceInputs(Inputs)

[view_source]

Input parameters for SplitOnSilence.

__init__

def __init__(node=None)

[view_source]

Initialize SplitOnSilenceInputs.

SplitOnSilenceOutputs Objects

class SplitOnSilenceOutputs(Outputs)

[view_source]

Output parameters for SplitOnSilence.

__init__

def __init__(node=None)

[view_source]

Initialize SplitOnSilenceOutputs.

SplitOnSilence Objects

class SplitOnSilence(AssetNode[SplitOnSilenceInputs, SplitOnSilenceOutputs])

[view_source]

SplitOnSilence node.

The "Split On Silence" function divides an audio recording into separate segments based on periods of silence, allowing for easier editing and analysis of individual sections.

InputType: audio OutputType: audio
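
On raw amplitude samples the idea reduces to grouping consecutive loud samples and treating quiet runs as separators. A toy sketch over a list of floats (real implementations work on audio frames with energy windows and minimum-silence durations):

```python
def split_on_silence(samples, threshold=0.1):
    # Group consecutive samples whose absolute amplitude exceeds the
    # silence threshold; runs at or below the threshold end a segment.
    segments, current = [], []
    for s in samples:
        if abs(s) > threshold:
            current.append(s)
        elif current:
            segments.append(current)
            current = []
    if current:
        segments.append(current)
    return segments

audio = [0.0, 0.5, 0.6, 0.0, 0.0, 0.4, 0.3, 0.0]
print(split_on_silence(audio))  # [[0.5, 0.6], [0.4, 0.3]]
```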

IntentRecognitionInputs Objects

class IntentRecognitionInputs(Inputs)

[view_source]

Input parameters for IntentRecognition.

__init__

def __init__(node=None)

[view_source]

Initialize IntentRecognitionInputs.

IntentRecognitionOutputs Objects

class IntentRecognitionOutputs(Outputs)

[view_source]

Output parameters for IntentRecognition.

__init__

def __init__(node=None)

[view_source]

Initialize IntentRecognitionOutputs.

IntentRecognition Objects

class IntentRecognition(AssetNode[IntentRecognitionInputs,
IntentRecognitionOutputs])

[view_source]

IntentRecognition node.

Classifies the user's utterance or text (provided in varied natural language) into one of several predefined classes, that is, intents.

InputType: audio OutputType: text

DepthEstimationInputs Objects

class DepthEstimationInputs(Inputs)

[view_source]

Input parameters for DepthEstimation.

__init__

def __init__(node=None)

[view_source]

Initialize DepthEstimationInputs.

DepthEstimationOutputs Objects

class DepthEstimationOutputs(Outputs)

[view_source]

Output parameters for DepthEstimation.

__init__

def __init__(node=None)

[view_source]

Initialize DepthEstimationOutputs.

DepthEstimation Objects

class DepthEstimation(AssetNode[DepthEstimationInputs,
DepthEstimationOutputs])

[view_source]

DepthEstimation node.

Depth estimation is a computational process that determines the distance of objects from a viewpoint, typically using visual data from cameras or sensors to create a three-dimensional understanding of a scene.

InputType: image OutputType: text
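
For stereo cameras, the classic geometric relation behind depth estimation is depth = focal length × baseline / disparity. The sketch below uses hypothetical camera parameters purely to show the arithmetic; learned monocular depth models work very differently.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    # Pinhole-stereo relation: depth = f * B / d.
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical camera: 700 px focal length, 0.12 m baseline.
print(depth_from_disparity(700, 0.12, 21))  # 4.0 metres
```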

ConnectorInputs Objects

class ConnectorInputs(Inputs)

[view_source]

Input parameters for Connector.

__init__

def __init__(node=None)

[view_source]

Initialize ConnectorInputs.

ConnectorOutputs Objects

class ConnectorOutputs(Outputs)

[view_source]

Output parameters for Connector.

__init__

def __init__(node=None)

[view_source]

Initialize ConnectorOutputs.

Connector Objects

class Connector(AssetNode[ConnectorInputs, ConnectorOutputs])

[view_source]

Connector node.

Connectors are integrations that allow you to connect your AI agents to external tools.

InputType: text OutputType: text

SpeakerRecognitionInputs Objects

class SpeakerRecognitionInputs(Inputs)

[view_source]

Input parameters for SpeakerRecognition.

__init__

def __init__(node=None)

[view_source]

Initialize SpeakerRecognitionInputs.

SpeakerRecognitionOutputs Objects

class SpeakerRecognitionOutputs(Outputs)

[view_source]

Output parameters for SpeakerRecognition.

__init__

def __init__(node=None)

[view_source]

Initialize SpeakerRecognitionOutputs.

SpeakerRecognition Objects

class SpeakerRecognition(AssetNode[SpeakerRecognitionInputs,
SpeakerRecognitionOutputs])

[view_source]

SpeakerRecognition node.

In speaker identification, an utterance from an unknown speaker is analyzed and compared with speech models of known speakers.

InputType: audio OutputType: label

SyntaxAnalysisInputs Objects

class SyntaxAnalysisInputs(Inputs)

[view_source]

Input parameters for SyntaxAnalysis.

__init__

def __init__(node=None)

[view_source]

Initialize SyntaxAnalysisInputs.

SyntaxAnalysisOutputs Objects

class SyntaxAnalysisOutputs(Outputs)

[view_source]

Output parameters for SyntaxAnalysis.

__init__

def __init__(node=None)

[view_source]

Initialize SyntaxAnalysisOutputs.

SyntaxAnalysis Objects

class SyntaxAnalysis(AssetNode[SyntaxAnalysisInputs, SyntaxAnalysisOutputs])

[view_source]

SyntaxAnalysis node.

The process of analyzing natural language using the rules of a formal grammar. Grammatical rules are applied to categories and groups of words, not individual words. Syntactic analysis assigns a semantic structure to text.

InputType: text OutputType: text

EntitySentimentAnalysisInputs Objects

class EntitySentimentAnalysisInputs(Inputs)

[view_source]

Input parameters for EntitySentimentAnalysis.

__init__

def __init__(node=None)

[view_source]

Initialize EntitySentimentAnalysisInputs.

EntitySentimentAnalysisOutputs Objects

class EntitySentimentAnalysisOutputs(Outputs)

[view_source]

Output parameters for EntitySentimentAnalysis.

__init__

def __init__(node=None)

[view_source]

Initialize EntitySentimentAnalysisOutputs.

EntitySentimentAnalysis Objects

class EntitySentimentAnalysis(AssetNode[EntitySentimentAnalysisInputs,
EntitySentimentAnalysisOutputs])

[view_source]

EntitySentimentAnalysis node.

Entity Sentiment Analysis combines both entity analysis and sentiment analysis and attempts to determine the sentiment (positive or negative) expressed about entities within the text.

InputType: text OutputType: label

ClassificationMetricInputs Objects

class ClassificationMetricInputs(Inputs)

[view_source]

Input parameters for ClassificationMetric.

__init__

def __init__(node=None)

[view_source]

Initialize ClassificationMetricInputs.

ClassificationMetricOutputs Objects

class ClassificationMetricOutputs(Outputs)

[view_source]

Output parameters for ClassificationMetric.

__init__

def __init__(node=None)

[view_source]

Initialize ClassificationMetricOutputs.

ClassificationMetric Objects

class ClassificationMetric(BaseMetric[ClassificationMetricInputs,
ClassificationMetricOutputs])

[view_source]

ClassificationMetric node.

A Classification Metric is a quantitative measure used to evaluate the quality and effectiveness of classification models.

InputType: text OutputType: text
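
Two of the most common such metrics, accuracy and precision, can be computed directly from predicted and true labels. A minimal sketch with made-up spam/ham labels:

```python
def accuracy(y_true, y_pred):
    # Fraction of predictions that match the true labels.
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def precision(y_true, y_pred, positive):
    # Of everything predicted as `positive`, the fraction that truly is.
    predicted_pos = [t for t, p in zip(y_true, y_pred) if p == positive]
    if not predicted_pos:
        return 0.0
    return sum(t == positive for t in predicted_pos) / len(predicted_pos)

y_true = ["spam", "ham", "spam", "ham"]
y_pred = ["spam", "spam", "spam", "ham"]
print(accuracy(y_true, y_pred))           # 0.75
print(precision(y_true, y_pred, "spam"))  # 2/3: one "ham" was misflagged
```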

TextDetectionInputs Objects

class TextDetectionInputs(Inputs)

[view_source]

Input parameters for TextDetection.

__init__

def __init__(node=None)

[view_source]

Initialize TextDetectionInputs.

TextDetectionOutputs Objects

class TextDetectionOutputs(Outputs)

[view_source]

Output parameters for TextDetection.

__init__

def __init__(node=None)

[view_source]

Initialize TextDetectionOutputs.

TextDetection Objects

class TextDetection(AssetNode[TextDetectionInputs, TextDetectionOutputs])

[view_source]

TextDetection node.

Detects text regions against complex backgrounds and labels them with bounding boxes.

InputType: image OutputType: text

GuardrailsInputs Objects

class GuardrailsInputs(Inputs)

[view_source]

Input parameters for Guardrails.

__init__

def __init__(node=None)

[view_source]

Initialize GuardrailsInputs.

GuardrailsOutputs Objects

class GuardrailsOutputs(Outputs)

[view_source]

Output parameters for Guardrails.

__init__

def __init__(node=None)

[view_source]

Initialize GuardrailsOutputs.

Guardrails Objects

class Guardrails(AssetNode[GuardrailsInputs, GuardrailsOutputs])

[view_source]

Guardrails node.

Guardrails are governance rules that enforce security, compliance, and operational best practices, helping prevent mistakes and detect suspicious activity.

InputType: text OutputType: text

EmotionDetectionInputs Objects

class EmotionDetectionInputs(Inputs)

[view_source]

Input parameters for EmotionDetection.

__init__

def __init__(node=None)

[view_source]

Initialize EmotionDetectionInputs.

EmotionDetectionOutputs Objects

class EmotionDetectionOutputs(Outputs)

[view_source]

Output parameters for EmotionDetection.

__init__

def __init__(node=None)

[view_source]

Initialize EmotionDetectionOutputs.

EmotionDetection Objects

class EmotionDetection(AssetNode[EmotionDetectionInputs,
EmotionDetectionOutputs])

[view_source]

EmotionDetection node.

Identifies human emotions from text or audio, enhancing user experience in chatbots or customer feedback analysis.

InputType: text OutputType: label

VideoForcedAlignmentInputs Objects

class VideoForcedAlignmentInputs(Inputs)

[view_source]

Input parameters for VideoForcedAlignment.

__init__

def __init__(node=None)

[view_source]

Initialize VideoForcedAlignmentInputs.

VideoForcedAlignmentOutputs Objects

class VideoForcedAlignmentOutputs(Outputs)

[view_source]

Output parameters for VideoForcedAlignment.

__init__

def __init__(node=None)

[view_source]

Initialize VideoForcedAlignmentOutputs.

VideoForcedAlignment Objects

class VideoForcedAlignment(AssetNode[VideoForcedAlignmentInputs,
VideoForcedAlignmentOutputs])

[view_source]

VideoForcedAlignment node.

Aligns the transcription of spoken content in a video with its corresponding timecodes, facilitating subtitle creation.

InputType: video OutputType: video

ImageContentModerationInputs Objects

class ImageContentModerationInputs(Inputs)

[view_source]

Input parameters for ImageContentModeration.

__init__

def __init__(node=None)

[view_source]

Initialize ImageContentModerationInputs.

ImageContentModerationOutputs Objects

class ImageContentModerationOutputs(Outputs)

[view_source]

Output parameters for ImageContentModeration.

__init__

def __init__(node=None)

[view_source]

Initialize ImageContentModerationOutputs.

ImageContentModeration Objects

class ImageContentModeration(AssetNode[ImageContentModerationInputs,
ImageContentModerationOutputs])

[view_source]

ImageContentModeration node.

Detects and filters out inappropriate or harmful images, essential for platforms with user-generated visual content.

InputType: image OutputType: label

TextSummarizationInputs Objects

class TextSummarizationInputs(Inputs)

[view_source]

Input parameters for TextSummarization.

__init__

def __init__(node=None)

[view_source]

Initialize TextSummarizationInputs.

TextSummarizationOutputs Objects

class TextSummarizationOutputs(Outputs)

[view_source]

Output parameters for TextSummarization.

__init__

def __init__(node=None)

[view_source]

Initialize TextSummarizationOutputs.

TextSummarization Objects

class TextSummarization(AssetNode[TextSummarizationInputs,
TextSummarizationOutputs])

[view_source]

TextSummarization node.

Extracts the main points from a larger body of text, producing a concise summary without losing the primary message.

InputType: text OutputType: text
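
A crude extractive version of the idea: score each sentence by the corpus-wide frequency of its words and keep the top scorers in their original order. This toy heuristic only illustrates the concept; the actual node uses abstractive or learned extractive models.

```python
import re
from collections import Counter

def summarize(text, n_sentences=1):
    # Split into sentences, score each by total word frequency, keep the best.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"\w+", text.lower()))

    def score(sentence):
        return sum(freq[w] for w in re.findall(r"\w+", sentence.lower()))

    ranked = sorted(sentences, key=score, reverse=True)[:n_sentences]
    # Preserve the original order of the selected sentences.
    return " ".join(s for s in sentences if s in ranked)

print(summarize("Cats sleep a lot. Cats like cats. Dogs bark."))
```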

ImageToVideoGenerationInputs Objects

class ImageToVideoGenerationInputs(Inputs)

[view_source]

Input parameters for ImageToVideoGeneration.

__init__

def __init__(node=None)

[view_source]

Initialize ImageToVideoGenerationInputs.

ImageToVideoGenerationOutputs Objects

class ImageToVideoGenerationOutputs(Outputs)

[view_source]

Output parameters for ImageToVideoGeneration.

__init__

def __init__(node=None)

[view_source]

Initialize ImageToVideoGenerationOutputs.

ImageToVideoGeneration Objects

class ImageToVideoGeneration(AssetNode[ImageToVideoGenerationInputs,
ImageToVideoGenerationOutputs])

[view_source]

ImageToVideoGeneration node.

The Image To Video Generation function transforms a series of static images into a cohesive, dynamic video sequence, often incorporating transitions, effects, and synchronization with audio to create a visually engaging narrative.

InputType: image OutputType: video

VideoUnderstandingInputs Objects

class VideoUnderstandingInputs(Inputs)

[view_source]

Input parameters for VideoUnderstanding.

__init__

def __init__(node=None)

[view_source]

Initialize VideoUnderstandingInputs.

VideoUnderstandingOutputs Objects

class VideoUnderstandingOutputs(Outputs)

[view_source]

Output parameters for VideoUnderstanding.

__init__

def __init__(node=None)

[view_source]

Initialize VideoUnderstandingOutputs.

VideoUnderstanding Objects

class VideoUnderstanding(AssetNode[VideoUnderstandingInputs,
VideoUnderstandingOutputs])

[view_source]

VideoUnderstanding node.

Video Understanding is the process of analyzing and interpreting video content to extract meaningful information, such as identifying objects, actions, events, and contextual relationships within the footage.

InputType: video OutputType: text

TextGenerationMetricDefaultInputs Objects

class TextGenerationMetricDefaultInputs(Inputs)

[view_source]

Input parameters for TextGenerationMetricDefault.

__init__

def __init__(node=None)

[view_source]

Initialize TextGenerationMetricDefaultInputs.

TextGenerationMetricDefaultOutputs Objects

class TextGenerationMetricDefaultOutputs(Outputs)

[view_source]

Output parameters for TextGenerationMetricDefault.

__init__

def __init__(node=None)

[view_source]

Initialize TextGenerationMetricDefaultOutputs.

TextGenerationMetricDefault Objects

class TextGenerationMetricDefault(
BaseMetric[TextGenerationMetricDefaultInputs,
TextGenerationMetricDefaultOutputs])

[view_source]

TextGenerationMetricDefault node.

The "Text Generation Metric Default" function provides a standard set of evaluation metrics for assessing the quality and performance of text generation models.

InputType: text OutputType: text

TextToVideoGenerationInputs Objects

class TextToVideoGenerationInputs(Inputs)

[view_source]

Input parameters for TextToVideoGeneration.

__init__

def __init__(node=None)

[view_source]

Initialize TextToVideoGenerationInputs.

TextToVideoGenerationOutputs Objects

class TextToVideoGenerationOutputs(Outputs)

[view_source]

Output parameters for TextToVideoGeneration.

__init__

def __init__(node=None)

[view_source]

Initialize TextToVideoGenerationOutputs.

TextToVideoGeneration Objects

class TextToVideoGeneration(AssetNode[TextToVideoGenerationInputs,
TextToVideoGenerationOutputs])

[view_source]

TextToVideoGeneration node.

Text To Video Generation is a process that converts written descriptions or scripts into dynamic, visual video content using advanced algorithms and artificial intelligence.

InputType: text OutputType: video

VideoLabelDetectionInputs Objects

class VideoLabelDetectionInputs(Inputs)

[view_source]

Input parameters for VideoLabelDetection.

__init__

def __init__(node=None)

[view_source]

Initialize VideoLabelDetectionInputs.

VideoLabelDetectionOutputs Objects

class VideoLabelDetectionOutputs(Outputs)

[view_source]

Output parameters for VideoLabelDetection.

__init__

def __init__(node=None)

[view_source]

Initialize VideoLabelDetectionOutputs.

VideoLabelDetection Objects

class VideoLabelDetection(AssetNode[VideoLabelDetectionInputs,
VideoLabelDetectionOutputs])

[view_source]

VideoLabelDetection node.

Identifies and tags objects, scenes, or activities within a video. Useful for content indexing and recommendation systems.

InputType: video OutputType: label

TextSpamDetectionInputs Objects

class TextSpamDetectionInputs(Inputs)

[view_source]

Input parameters for TextSpamDetection.

__init__

def __init__(node=None)

[view_source]

Initialize TextSpamDetectionInputs.

TextSpamDetectionOutputs Objects

class TextSpamDetectionOutputs(Outputs)

[view_source]

Output parameters for TextSpamDetection.

__init__

def __init__(node=None)

[view_source]

Initialize TextSpamDetectionOutputs.

TextSpamDetection Objects

class TextSpamDetection(AssetNode[TextSpamDetectionInputs,
TextSpamDetectionOutputs])

[view_source]

TextSpamDetection node.

Identifies and filters out unwanted or irrelevant text content, ideal for moderating user-generated content or ensuring quality in communication platforms.

InputType: text OutputType: label
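A real spam detector learns its cues from data; the fixed marker list below is purely illustrative, sketching the text-in, label-out shape of the task:

```python
SPAM_MARKERS = {"free", "winner", "prize", "click", "urgent"}  # illustrative list

def is_spam(text: str, threshold: int = 2) -> bool:
    """Label text as spam when it contains at least `threshold` marker words.
    A toy stand-in for a learned classifier."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return len(words & SPAM_MARKERS) >= threshold
```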

TextContentModerationInputs Objects

class TextContentModerationInputs(Inputs)

[view_source]

Input parameters for TextContentModeration.

__init__

def __init__(node=None)

[view_source]

Initialize TextContentModerationInputs.

TextContentModerationOutputs Objects

class TextContentModerationOutputs(Outputs)

[view_source]

Output parameters for TextContentModeration.

__init__

def __init__(node=None)

[view_source]

Initialize TextContentModerationOutputs.

TextContentModeration Objects

class TextContentModeration(AssetNode[TextContentModerationInputs,
TextContentModerationOutputs])

[view_source]

TextContentModeration node.

Scans and identifies potentially harmful, offensive, or inappropriate textual content, ensuring safer user environments.

InputType: text OutputType: label

AudioTranscriptImprovementInputs Objects

class AudioTranscriptImprovementInputs(Inputs)

[view_source]

Input parameters for AudioTranscriptImprovement.

__init__

def __init__(node=None)

[view_source]

Initialize AudioTranscriptImprovementInputs.

AudioTranscriptImprovementOutputs Objects

class AudioTranscriptImprovementOutputs(Outputs)

[view_source]

Output parameters for AudioTranscriptImprovement.

__init__

def __init__(node=None)

[view_source]

Initialize AudioTranscriptImprovementOutputs.

AudioTranscriptImprovement Objects

class AudioTranscriptImprovement(AssetNode[AudioTranscriptImprovementInputs,
AudioTranscriptImprovementOutputs])

[view_source]

AudioTranscriptImprovement node.

Refines and corrects transcriptions generated from audio data, improving readability and accuracy.

InputType: audio OutputType: text

AudioTranscriptAnalysisInputs Objects

class AudioTranscriptAnalysisInputs(Inputs)

[view_source]

Input parameters for AudioTranscriptAnalysis.

__init__

def __init__(node=None)

[view_source]

Initialize AudioTranscriptAnalysisInputs.

AudioTranscriptAnalysisOutputs Objects

class AudioTranscriptAnalysisOutputs(Outputs)

[view_source]

Output parameters for AudioTranscriptAnalysis.

__init__

def __init__(node=None)

[view_source]

Initialize AudioTranscriptAnalysisOutputs.

AudioTranscriptAnalysis Objects

class AudioTranscriptAnalysis(AssetNode[AudioTranscriptAnalysisInputs,
AudioTranscriptAnalysisOutputs])

[view_source]

AudioTranscriptAnalysis node.

Analyzes transcribed audio data for insights, patterns, or specific information extraction.

InputType: audio OutputType: text

SpeechNonSpeechClassificationInputs Objects

class SpeechNonSpeechClassificationInputs(Inputs)

[view_source]

Input parameters for SpeechNonSpeechClassification.

__init__

def __init__(node=None)

[view_source]

Initialize SpeechNonSpeechClassificationInputs.

SpeechNonSpeechClassificationOutputs Objects

class SpeechNonSpeechClassificationOutputs(Outputs)

[view_source]

Output parameters for SpeechNonSpeechClassification.

__init__

def __init__(node=None)

[view_source]

Initialize SpeechNonSpeechClassificationOutputs.

SpeechNonSpeechClassification Objects

class SpeechNonSpeechClassification(
AssetNode[SpeechNonSpeechClassificationInputs,
SpeechNonSpeechClassificationOutputs])

[view_source]

SpeechNonSpeechClassification node.

Differentiates between speech and non-speech audio segments. Useful in editing software and transcription services for excluding irrelevant audio.

InputType: audio OutputType: label
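Production classifiers use learned spectral features; the energy gate below is only a conceptual sketch of how frames of an audio signal can be assigned speech/non-speech labels:

```python
def classify_frames(samples, frame_size=4, threshold=0.1):
    """Label each fixed-size frame of an audio signal as 'speech' or
    'non-speech' by comparing its mean absolute amplitude to a threshold."""
    labels = []
    for i in range(0, len(samples), frame_size):
        frame = samples[i:i + frame_size]
        energy = sum(abs(s) for s in frame) / len(frame)
        labels.append("speech" if energy >= threshold else "non-speech")
    return labels
```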

AudioGenerationMetricInputs Objects

class AudioGenerationMetricInputs(Inputs)

[view_source]

Input parameters for AudioGenerationMetric.

__init__

def __init__(node=None)

[view_source]

Initialize AudioGenerationMetricInputs.

AudioGenerationMetricOutputs Objects

class AudioGenerationMetricOutputs(Outputs)

[view_source]

Output parameters for AudioGenerationMetric.

__init__

def __init__(node=None)

[view_source]

Initialize AudioGenerationMetricOutputs.

AudioGenerationMetric Objects

class AudioGenerationMetric(BaseMetric[AudioGenerationMetricInputs,
AudioGenerationMetricOutputs])

[view_source]

AudioGenerationMetric node.

The Audio Generation Metric is a quantitative measure used to evaluate the quality, accuracy, and overall performance of audio generated by artificial intelligence systems, often considering factors such as fidelity, intelligibility, and similarity to human-produced audio.

InputType: text OutputType: text

NamedEntityRecognitionInputs Objects

class NamedEntityRecognitionInputs(Inputs)

[view_source]

Input parameters for NamedEntityRecognition.

__init__

def __init__(node=None)

[view_source]

Initialize NamedEntityRecognitionInputs.

NamedEntityRecognitionOutputs Objects

class NamedEntityRecognitionOutputs(Outputs)

[view_source]

Output parameters for NamedEntityRecognition.

__init__

def __init__(node=None)

[view_source]

Initialize NamedEntityRecognitionOutputs.

NamedEntityRecognition Objects

class NamedEntityRecognition(AssetNode[NamedEntityRecognitionInputs,
NamedEntityRecognitionOutputs])

[view_source]

NamedEntityRecognition node.

Identifies and classifies named entities (e.g., persons, organizations, locations) within text. Useful for information extraction, content tagging, and search enhancements.

InputType: text OutputType: label
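As a toy sketch of the task (real NER models use sentence context and a learned tagger, not capitalization alone), candidate entities can be approximated as runs of capitalized words:

```python
import re

def tag_entities(text: str):
    """Toy NER sketch: treat each run of capitalized words as a candidate
    entity. Capitalization alone misfires on sentence-initial words, which
    is why real taggers rely on learned context instead."""
    return re.findall(r"\b[A-Z][a-z]+(?:\s+[A-Z][a-z]+)*\b", text)
```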

SpeechSynthesisInputs Objects

class SpeechSynthesisInputs(Inputs)

[view_source]

Input parameters for SpeechSynthesis.

__init__

def __init__(node=None)

[view_source]

Initialize SpeechSynthesisInputs.

SpeechSynthesisOutputs Objects

class SpeechSynthesisOutputs(Outputs)

[view_source]

Output parameters for SpeechSynthesis.

__init__

def __init__(node=None)

[view_source]

Initialize SpeechSynthesisOutputs.

SpeechSynthesis Objects

class SpeechSynthesis(AssetNode[SpeechSynthesisInputs,
SpeechSynthesisOutputs])

[view_source]

SpeechSynthesis node.

Generates human-like speech from written text. Ideal for text-to-speech applications, audiobooks, and voice assistants.

InputType: text OutputType: audio

DocumentInformationExtractionInputs Objects

class DocumentInformationExtractionInputs(Inputs)

[view_source]

Input parameters for DocumentInformationExtraction.

__init__

def __init__(node=None)

[view_source]

Initialize DocumentInformationExtractionInputs.

DocumentInformationExtractionOutputs Objects

class DocumentInformationExtractionOutputs(Outputs)

[view_source]

Output parameters for DocumentInformationExtraction.

__init__

def __init__(node=None)

[view_source]

Initialize DocumentInformationExtractionOutputs.

DocumentInformationExtraction Objects

class DocumentInformationExtraction(
AssetNode[DocumentInformationExtractionInputs,
DocumentInformationExtractionOutputs])

[view_source]

DocumentInformationExtraction node.

Document Information Extraction is the process of automatically identifying, extracting, and structuring relevant data from unstructured or semi-structured documents, such as invoices, receipts, contracts, and forms, to facilitate easier data management and analysis.

InputType: image OutputType: text

OcrInputs Objects

class OcrInputs(Inputs)

[view_source]

Input parameters for Ocr.

__init__

def __init__(node=None)

[view_source]

Initialize OcrInputs.

OcrOutputs Objects

class OcrOutputs(Outputs)

[view_source]

Output parameters for Ocr.

__init__

def __init__(node=None)

[view_source]

Initialize OcrOutputs.

Ocr Objects

class Ocr(AssetNode[OcrInputs, OcrOutputs])

[view_source]

Ocr node.

Converts images of typed, handwritten, or printed text into machine-encoded text. Used in digitizing printed texts for data retrieval.

InputType: image OutputType: text

SubtitlingTranslationInputs Objects

class SubtitlingTranslationInputs(Inputs)

[view_source]

Input parameters for SubtitlingTranslation.

__init__

def __init__(node=None)

[view_source]

Initialize SubtitlingTranslationInputs.

SubtitlingTranslationOutputs Objects

class SubtitlingTranslationOutputs(Outputs)

[view_source]

Output parameters for SubtitlingTranslation.

__init__

def __init__(node=None)

[view_source]

Initialize SubtitlingTranslationOutputs.

SubtitlingTranslation Objects

class SubtitlingTranslation(AssetNode[SubtitlingTranslationInputs,
SubtitlingTranslationOutputs])

[view_source]

SubtitlingTranslation node.

Converts the text of subtitles from one language to another, ensuring context and cultural nuances are maintained. Essential for global content distribution.

InputType: text OutputType: text

TextToAudioInputs Objects

class TextToAudioInputs(Inputs)

[view_source]

Input parameters for TextToAudio.

__init__

def __init__(node=None)

[view_source]

Initialize TextToAudioInputs.

TextToAudioOutputs Objects

class TextToAudioOutputs(Outputs)

[view_source]

Output parameters for TextToAudio.

__init__

def __init__(node=None)

[view_source]

Initialize TextToAudioOutputs.

TextToAudio Objects

class TextToAudio(AssetNode[TextToAudioInputs, TextToAudioOutputs])

[view_source]

TextToAudio node.

The Text to Audio function converts written text into spoken words, allowing users to listen to the content instead of reading it.

InputType: text OutputType: audio

MultilingualSpeechRecognitionInputs Objects

class MultilingualSpeechRecognitionInputs(Inputs)

[view_source]

Input parameters for MultilingualSpeechRecognition.

__init__

def __init__(node=None)

[view_source]

Initialize MultilingualSpeechRecognitionInputs.

MultilingualSpeechRecognitionOutputs Objects

class MultilingualSpeechRecognitionOutputs(Outputs)

[view_source]

Output parameters for MultilingualSpeechRecognition.

__init__

def __init__(node=None)

[view_source]

Initialize MultilingualSpeechRecognitionOutputs.

MultilingualSpeechRecognition Objects

class MultilingualSpeechRecognition(
AssetNode[MultilingualSpeechRecognitionInputs,
MultilingualSpeechRecognitionOutputs])

[view_source]

MultilingualSpeechRecognition node.

Multilingual Speech Recognition is a technology that enables the automatic transcription of spoken language into text across multiple languages, allowing for seamless communication and understanding in diverse linguistic contexts.

InputType: audio OutputType: text

OffensiveLanguageIdentificationInputs Objects

class OffensiveLanguageIdentificationInputs(Inputs)

[view_source]

Input parameters for OffensiveLanguageIdentification.

__init__

def __init__(node=None)

[view_source]

Initialize OffensiveLanguageIdentificationInputs.

OffensiveLanguageIdentificationOutputs Objects

class OffensiveLanguageIdentificationOutputs(Outputs)

[view_source]

Output parameters for OffensiveLanguageIdentification.

__init__

def __init__(node=None)

[view_source]

Initialize OffensiveLanguageIdentificationOutputs.

OffensiveLanguageIdentification Objects

class OffensiveLanguageIdentification(
AssetNode[OffensiveLanguageIdentificationInputs,
OffensiveLanguageIdentificationOutputs])

[view_source]

OffensiveLanguageIdentification node.

Detects language or phrases that might be considered offensive, aiding in content moderation and creating respectful user interactions.

InputType: text OutputType: label

BenchmarkScoringMtInputs Objects

class BenchmarkScoringMtInputs(Inputs)

[view_source]

Input parameters for BenchmarkScoringMt.

__init__

def __init__(node=None)

[view_source]

Initialize BenchmarkScoringMtInputs.

BenchmarkScoringMtOutputs Objects

class BenchmarkScoringMtOutputs(Outputs)

[view_source]

Output parameters for BenchmarkScoringMt.

__init__

def __init__(node=None)

[view_source]

Initialize BenchmarkScoringMtOutputs.

BenchmarkScoringMt Objects

class BenchmarkScoringMt(AssetNode[BenchmarkScoringMtInputs,
BenchmarkScoringMtOutputs])

[view_source]

BenchmarkScoringMt node.

Benchmark Scoring MT is a function designed to evaluate and score machine translation systems by comparing their output against a set of predefined benchmarks, thereby assessing their accuracy and performance.

InputType: text OutputType: label

SpeakerDiarizationAudioInputs Objects

class SpeakerDiarizationAudioInputs(Inputs)

[view_source]

Input parameters for SpeakerDiarizationAudio.

__init__

def __init__(node=None)

[view_source]

Initialize SpeakerDiarizationAudioInputs.

SpeakerDiarizationAudioOutputs Objects

class SpeakerDiarizationAudioOutputs(Outputs)

[view_source]

Output parameters for SpeakerDiarizationAudio.

__init__

def __init__(node=None)

[view_source]

Initialize SpeakerDiarizationAudioOutputs.

SpeakerDiarizationAudio Objects

class SpeakerDiarizationAudio(BaseSegmentor[SpeakerDiarizationAudioInputs,
SpeakerDiarizationAudioOutputs])

[view_source]

SpeakerDiarizationAudio node.

Identifies individual speakers and their respective speech segments within an audio clip. Ideal for multi-speaker recordings or conference calls.

InputType: audio OutputType: label
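Diarization output is often post-processed by merging consecutive segments from the same speaker into turns. The helper below is a hypothetical sketch of that step, assuming segments arrive as sorted `(start, end, speaker)` tuples:

```python
def merge_turns(segments):
    """Collapse consecutive diarization segments from the same speaker
    into single turns. Segments are (start, end, speaker), assumed sorted."""
    merged = []
    for start, end, speaker in segments:
        if merged and merged[-1][2] == speaker:
            # Same speaker continues: extend the previous turn's end time.
            merged[-1] = (merged[-1][0], end, speaker)
        else:
            merged.append((start, end, speaker))
    return merged
```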

VoiceCloningInputs Objects

class VoiceCloningInputs(Inputs)

[view_source]

Input parameters for VoiceCloning.

__init__

def __init__(node=None)

[view_source]

Initialize VoiceCloningInputs.

VoiceCloningOutputs Objects

class VoiceCloningOutputs(Outputs)

[view_source]

Output parameters for VoiceCloning.

__init__

def __init__(node=None)

[view_source]

Initialize VoiceCloningOutputs.

VoiceCloning Objects

class VoiceCloning(AssetNode[VoiceCloningInputs, VoiceCloningOutputs])

[view_source]

VoiceCloning node.

Replicates a person's voice based on a sample, allowing for the generation of speech in that person's tone and style. Should be used cautiously due to ethical considerations.

InputType: text OutputType: audio

SearchInputs Objects

class SearchInputs(Inputs)

[view_source]

Input parameters for Search.

__init__

def __init__(node=None)

[view_source]

Initialize SearchInputs.

SearchOutputs Objects

class SearchOutputs(Outputs)

[view_source]

Output parameters for Search.

__init__

def __init__(node=None)

[view_source]

Initialize SearchOutputs.

Search Objects

class Search(AssetNode[SearchInputs, SearchOutputs])

[view_source]

Search node.

An algorithm that identifies and returns data or items that match particular keywords or conditions from a dataset. A fundamental tool for databases and websites.

InputType: text OutputType: text
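A minimal sketch of keyword search (not the platform's retrieval implementation) ranks documents by how many query terms they contain:

```python
def keyword_search(query: str, documents):
    """Rank documents by the number of query terms each contains and
    return the matches in descending score order."""
    terms = set(query.lower().split())
    scored = []
    for doc in documents:
        score = sum(1 for t in terms if t in doc.lower())
        if score:
            scored.append((score, doc))
    scored.sort(key=lambda pair: -pair[0])
    return [doc for _, doc in scored]
```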

ObjectDetectionInputs Objects

class ObjectDetectionInputs(Inputs)

[view_source]

Input parameters for ObjectDetection.

__init__

def __init__(node=None)

[view_source]

Initialize ObjectDetectionInputs.

ObjectDetectionOutputs Objects

class ObjectDetectionOutputs(Outputs)

[view_source]

Output parameters for ObjectDetection.

__init__

def __init__(node=None)

[view_source]

Initialize ObjectDetectionOutputs.

ObjectDetection Objects

class ObjectDetection(AssetNode[ObjectDetectionInputs,
ObjectDetectionOutputs])

[view_source]

ObjectDetection node.

Object Detection is a computer vision technology that identifies and locates objects within an image, typically by drawing bounding boxes around the detected objects and classifying them into predefined categories.

InputType: video OutputType: text
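Detections are conventionally evaluated by intersection-over-union (IoU), the standard overlap measure between a predicted and a ground-truth bounding box; a self-contained sketch:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) bounding boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

Two unit boxes overlapping in a 1×1 region out of a 7-unit union give an IoU of 1/7.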

DiacritizationInputs Objects

class DiacritizationInputs(Inputs)

[view_source]

Input parameters for Diacritization.

__init__

def __init__(node=None)

[view_source]

Initialize DiacritizationInputs.

DiacritizationOutputs Objects

class DiacritizationOutputs(Outputs)

[view_source]

Output parameters for Diacritization.

__init__

def __init__(node=None)

[view_source]

Initialize DiacritizationOutputs.

Diacritization Objects

class Diacritization(AssetNode[DiacritizationInputs, DiacritizationOutputs])

[view_source]

Diacritization node.

Adds diacritical marks to text, essential for languages where meaning can change based on diacritics.

InputType: text OutputType: text

SpeakerDiarizationVideoInputs Objects

class SpeakerDiarizationVideoInputs(Inputs)

[view_source]

Input parameters for SpeakerDiarizationVideo.

__init__

def __init__(node=None)

[view_source]

Initialize SpeakerDiarizationVideoInputs.

SpeakerDiarizationVideoOutputs Objects

class SpeakerDiarizationVideoOutputs(Outputs)

[view_source]

Output parameters for SpeakerDiarizationVideo.

__init__

def __init__(node=None)

[view_source]

Initialize SpeakerDiarizationVideoOutputs.

SpeakerDiarizationVideo Objects

class SpeakerDiarizationVideo(AssetNode[SpeakerDiarizationVideoInputs,
SpeakerDiarizationVideoOutputs])

[view_source]

SpeakerDiarizationVideo node.

Segments a video based on different speakers, identifying when each individual speaks. Useful for transcriptions and understanding multi-person conversations.

InputType: video OutputType: label

AudioForcedAlignmentInputs Objects

class AudioForcedAlignmentInputs(Inputs)

[view_source]

Input parameters for AudioForcedAlignment.

__init__

def __init__(node=None)

[view_source]

Initialize AudioForcedAlignmentInputs.

AudioForcedAlignmentOutputs Objects

class AudioForcedAlignmentOutputs(Outputs)

[view_source]

Output parameters for AudioForcedAlignment.

__init__

def __init__(node=None)

[view_source]

Initialize AudioForcedAlignmentOutputs.

AudioForcedAlignment Objects

class AudioForcedAlignment(AssetNode[AudioForcedAlignmentInputs,
AudioForcedAlignmentOutputs])

[view_source]

AudioForcedAlignment node.

Synchronizes phonetic and phonological text with the corresponding segments in an audio file. Useful in linguistic research and detailed transcription tasks.

InputType: audio OutputType: audio

TokenClassificationInputs Objects

class TokenClassificationInputs(Inputs)

[view_source]

Input parameters for TokenClassification.

__init__

def __init__(node=None)

[view_source]

Initialize TokenClassificationInputs.

TokenClassificationOutputs Objects

class TokenClassificationOutputs(Outputs)

[view_source]

Output parameters for TokenClassification.

__init__

def __init__(node=None)

[view_source]

Initialize TokenClassificationOutputs.

TokenClassification Objects

class TokenClassification(AssetNode[TokenClassificationInputs,
TokenClassificationOutputs])

[view_source]

TokenClassification node.

Token-level classification means that each token is given a label; for example, a part-of-speech tagger assigns each word a particular part of speech.

InputType: text OutputType: label
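The per-token labeling shape of the task can be sketched with toy rules (digits as `NUM`, capitalized words as `PROPN`, everything else as `WORD`); a trained tagger replaces these rules with a learned model:

```python
def classify_tokens(tokens):
    """Assign each token a label, as a POS-style token classifier would.
    The hand-written rules here are a toy stand-in for a learned model."""
    labels = []
    for tok in tokens:
        if tok.isdigit():
            labels.append((tok, "NUM"))
        elif tok[:1].isupper():
            labels.append((tok, "PROPN"))
        else:
            labels.append((tok, "WORD"))
    return labels
```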

TopicClassificationInputs Objects

class TopicClassificationInputs(Inputs)

[view_source]

Input parameters for TopicClassification.

__init__

def __init__(node=None)

[view_source]

Initialize TopicClassificationInputs.

TopicClassificationOutputs Objects

class TopicClassificationOutputs(Outputs)

[view_source]

Output parameters for TopicClassification.

__init__

def __init__(node=None)

[view_source]

Initialize TopicClassificationOutputs.

TopicClassification Objects

class TopicClassification(AssetNode[TopicClassificationInputs,
TopicClassificationOutputs])

[view_source]

TopicClassification node.

Assigns categories or topics to a piece of text based on its content, facilitating content organization and retrieval.

InputType: text OutputType: label

IntentClassificationInputs Objects

class IntentClassificationInputs(Inputs)

[view_source]

Input parameters for IntentClassification.

__init__

def __init__(node=None)

[view_source]

Initialize IntentClassificationInputs.

IntentClassificationOutputs Objects

class IntentClassificationOutputs(Outputs)

[view_source]

Output parameters for IntentClassification.

__init__

def __init__(node=None)

[view_source]

Initialize IntentClassificationOutputs.

IntentClassification Objects

class IntentClassification(AssetNode[IntentClassificationInputs,
IntentClassificationOutputs])

[view_source]

IntentClassification node.

Intent Classification is a natural language processing task that involves analyzing and categorizing user text input to determine the underlying purpose or goal behind the communication, such as booking a flight, asking for weather information, or setting a reminder.

InputType: text OutputType: label

VideoContentModerationInputs Objects

class VideoContentModerationInputs(Inputs)

[view_source]

Input parameters for VideoContentModeration.

__init__

def __init__(node=None)

[view_source]

Initialize VideoContentModerationInputs.

VideoContentModerationOutputs Objects

class VideoContentModerationOutputs(Outputs)

[view_source]

Output parameters for VideoContentModeration.

__init__

def __init__(node=None)

[view_source]

Initialize VideoContentModerationOutputs.

VideoContentModeration Objects

class VideoContentModeration(AssetNode[VideoContentModerationInputs,
VideoContentModerationOutputs])

[view_source]

VideoContentModeration node.

Automatically reviews video content to detect and possibly remove inappropriate or harmful material. Essential for user-generated content platforms.

InputType: video OutputType: label

TextGenerationMetricInputs Objects

class TextGenerationMetricInputs(Inputs)

[view_source]

Input parameters for TextGenerationMetric.

__init__

def __init__(node=None)

[view_source]

Initialize TextGenerationMetricInputs.

TextGenerationMetricOutputs Objects

class TextGenerationMetricOutputs(Outputs)

[view_source]

Output parameters for TextGenerationMetric.

__init__

def __init__(node=None)

[view_source]

Initialize TextGenerationMetricOutputs.

TextGenerationMetric Objects

class TextGenerationMetric(BaseMetric[TextGenerationMetricInputs,
TextGenerationMetricOutputs])

[view_source]

TextGenerationMetric node.

A Text Generation Metric is a quantitative measure used to evaluate the quality and effectiveness of text produced by natural language processing models, often assessing aspects such as coherence, relevance, fluency, and adherence to given prompts or instructions.

InputType: text OutputType: text

ImageEmbeddingInputs Objects

class ImageEmbeddingInputs(Inputs)

[view_source]

Input parameters for ImageEmbedding.

__init__

def __init__(node=None)

[view_source]

Initialize ImageEmbeddingInputs.

ImageEmbeddingOutputs Objects

class ImageEmbeddingOutputs(Outputs)

[view_source]

Output parameters for ImageEmbedding.

__init__

def __init__(node=None)

[view_source]

Initialize ImageEmbeddingOutputs.

ImageEmbedding Objects

class ImageEmbedding(AssetNode[ImageEmbeddingInputs, ImageEmbeddingOutputs])

[view_source]

ImageEmbedding node.

Image Embedding is a process that transforms an image into a fixed-dimensional vector representation, capturing its essential features and enabling efficient comparison, retrieval, and analysis in various machine learning and computer vision tasks.

InputType: image OutputType: text
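Embedding vectors are typically compared with cosine similarity: values near 1.0 mean the images they represent are close in the embedding space. A self-contained sketch of the comparison step:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0
```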

ImageLabelDetectionInputs Objects

class ImageLabelDetectionInputs(Inputs)

[view_source]

Input parameters for ImageLabelDetection.

__init__

def __init__(node=None)

[view_source]

Initialize ImageLabelDetectionInputs.

ImageLabelDetectionOutputs Objects

class ImageLabelDetectionOutputs(Outputs)

[view_source]

Output parameters for ImageLabelDetection.

__init__

def __init__(node=None)

[view_source]

Initialize ImageLabelDetectionOutputs.

ImageLabelDetection Objects

class ImageLabelDetection(AssetNode[ImageLabelDetectionInputs,
ImageLabelDetectionOutputs])

[view_source]

ImageLabelDetection node.

Identifies objects, themes, or topics within images, useful for image categorization, search, and recommendation systems.

InputType: image OutputType: label

ImageColorizationInputs Objects

class ImageColorizationInputs(Inputs)

[view_source]

Input parameters for ImageColorization.

__init__

def __init__(node=None)

[view_source]

Initialize ImageColorizationInputs.

ImageColorizationOutputs Objects

class ImageColorizationOutputs(Outputs)

[view_source]

Output parameters for ImageColorization.

__init__

def __init__(node=None)

[view_source]

Initialize ImageColorizationOutputs.

ImageColorization Objects

class ImageColorization(AssetNode[ImageColorizationInputs,
ImageColorizationOutputs])

[view_source]

ImageColorization node.

Image colorization is a process that involves adding color to grayscale images, transforming them from black-and-white to full-color representations, often using advanced algorithms and machine learning techniques to predict and apply the appropriate hues and shades.

InputType: image OutputType: image

MetricAggregationInputs Objects

class MetricAggregationInputs(Inputs)

[view_source]

Input parameters for MetricAggregation.

__init__

def __init__(node=None)

[view_source]

Initialize MetricAggregationInputs.

MetricAggregationOutputs Objects

class MetricAggregationOutputs(Outputs)

[view_source]

Output parameters for MetricAggregation.

__init__

def __init__(node=None)

[view_source]

Initialize MetricAggregationOutputs.

MetricAggregation Objects

class MetricAggregation(BaseMetric[MetricAggregationInputs,
MetricAggregationOutputs])

[view_source]

MetricAggregation node.

Metric Aggregation is a function that computes and summarizes numerical data by applying statistical operations, such as averaging, summing, or finding the minimum and maximum values, to provide insights and facilitate analysis of large datasets.

InputType: text OutputType: text
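The statistical operations described above can be sketched as a small aggregation helper over per-item metric scores (a conceptual illustration, not this node's implementation):

```python
def aggregate(scores):
    """Summarize a list of numeric metric scores with common statistics."""
    return {
        "count": len(scores),
        "mean": sum(scores) / len(scores),
        "min": min(scores),
        "max": max(scores),
    }
```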

InstanceSegmentationInputs Objects

class InstanceSegmentationInputs(Inputs)

[view_source]

Input parameters for InstanceSegmentation.

__init__

def __init__(node=None)

[view_source]

Initialize InstanceSegmentationInputs.

InstanceSegmentationOutputs Objects

class InstanceSegmentationOutputs(Outputs)

[view_source]

Output parameters for InstanceSegmentation.

__init__

def __init__(node=None)

[view_source]

Initialize InstanceSegmentationOutputs.

InstanceSegmentation Objects

class InstanceSegmentation(AssetNode[InstanceSegmentationInputs,
InstanceSegmentationOutputs])

[view_source]

InstanceSegmentation node.

Instance segmentation is a computer vision task that involves detecting and delineating each distinct object within an image, assigning a unique label and precise boundary to every individual instance of objects, even if they belong to the same category.

InputType: image OutputType: label

OtherMultipurposeInputs Objects

class OtherMultipurposeInputs(Inputs)

[view_source]

Input parameters for OtherMultipurpose.

__init__

def __init__(node=None)

[view_source]

Initialize OtherMultipurposeInputs.

OtherMultipurposeOutputs Objects

class OtherMultipurposeOutputs(Outputs)

[view_source]

Output parameters for OtherMultipurpose.

__init__

def __init__(node=None)

[view_source]

Initialize OtherMultipurposeOutputs.

OtherMultipurpose Objects

class OtherMultipurpose(AssetNode[OtherMultipurposeInputs,
OtherMultipurposeOutputs])

[view_source]

OtherMultipurpose node.

The "Other (Multipurpose)" function serves as a versatile category designed to accommodate a wide range of tasks and activities that do not fit neatly into predefined classifications, offering flexibility and adaptability for various needs.

InputType: text OutputType: text

SpeechTranslationInputs Objects

class SpeechTranslationInputs(Inputs)

[view_source]

Input parameters for SpeechTranslation.

__init__

def __init__(node=None)

[view_source]

Initialize SpeechTranslationInputs.

SpeechTranslationOutputs Objects

class SpeechTranslationOutputs(Outputs)

[view_source]

Output parameters for SpeechTranslation.

__init__

def __init__(node=None)

[view_source]

Initialize SpeechTranslationOutputs.

SpeechTranslation Objects

class SpeechTranslation(AssetNode[SpeechTranslationInputs,
SpeechTranslationOutputs])

[view_source]

SpeechTranslation node.

Speech Translation is a technology that converts spoken language in real-time from one language to another, enabling seamless communication between speakers of different languages.

InputType: audio OutputType: text

ReferencelessTextGenerationMetricDefaultInputs Objects

class ReferencelessTextGenerationMetricDefaultInputs(Inputs)

[view_source]

Input parameters for ReferencelessTextGenerationMetricDefault.

__init__

def __init__(node=None)

[view_source]

Initialize ReferencelessTextGenerationMetricDefaultInputs.

ReferencelessTextGenerationMetricDefaultOutputs Objects

class ReferencelessTextGenerationMetricDefaultOutputs(Outputs)

[view_source]

Output parameters for ReferencelessTextGenerationMetricDefault.

__init__

def __init__(node=None)

[view_source]

Initialize ReferencelessTextGenerationMetricDefaultOutputs.

ReferencelessTextGenerationMetricDefault Objects

class ReferencelessTextGenerationMetricDefault(
BaseMetric[ReferencelessTextGenerationMetricDefaultInputs,
ReferencelessTextGenerationMetricDefaultOutputs])

[view_source]

ReferencelessTextGenerationMetricDefault node.

The Referenceless Text Generation Metric Default is a function designed to evaluate the quality of generated text without relying on reference texts for comparison.

InputType: text OutputType: text

ReferencelessTextGenerationMetricInputs Objects

class ReferencelessTextGenerationMetricInputs(Inputs)

[view_source]

Input parameters for ReferencelessTextGenerationMetric.

__init__

def __init__(node=None)

[view_source]

Initialize ReferencelessTextGenerationMetricInputs.

ReferencelessTextGenerationMetricOutputs Objects

class ReferencelessTextGenerationMetricOutputs(Outputs)

[view_source]

Output parameters for ReferencelessTextGenerationMetric.

__init__

def __init__(node=None)

[view_source]

Initialize ReferencelessTextGenerationMetricOutputs.

ReferencelessTextGenerationMetric Objects

class ReferencelessTextGenerationMetric(
BaseMetric[ReferencelessTextGenerationMetricInputs,
ReferencelessTextGenerationMetricOutputs])

[view_source]

ReferencelessTextGenerationMetric node.

The Referenceless Text Generation Metric is a method for evaluating the quality of generated text without requiring a reference text for comparison, often leveraging models or algorithms to assess coherence, relevance, and fluency based on intrinsic properties of the text itself.

InputType: text OutputType: text

TextDenormalizationInputs Objects

class TextDenormalizationInputs(Inputs)

[view_source]

Input parameters for TextDenormalization.

__init__

def __init__(node=None)

[view_source]

Initialize TextDenormalizationInputs.

TextDenormalizationOutputs Objects

class TextDenormalizationOutputs(Outputs)

[view_source]

Output parameters for TextDenormalization.

__init__

def __init__(node=None)

[view_source]

Initialize TextDenormalizationOutputs.

TextDenormalization Objects

class TextDenormalization(AssetNode[TextDenormalizationInputs,
TextDenormalizationOutputs])

[view_source]

TextDenormalization node.

Converts standardized or normalized text into its original, often more readable, form. Useful in natural language generation tasks.

InputType: text OutputType: label

ImageCompressionInputs Objects

class ImageCompressionInputs(Inputs)

[view_source]

Input parameters for ImageCompression.

__init__

def __init__(node=None)

[view_source]

Initialize ImageCompressionInputs.

ImageCompressionOutputs Objects

class ImageCompressionOutputs(Outputs)

[view_source]

Output parameters for ImageCompression.

__init__

def __init__(node=None)

[view_source]

Initialize ImageCompressionOutputs.

ImageCompression Objects

class ImageCompression(AssetNode[ImageCompressionInputs,
ImageCompressionOutputs])

[view_source]

ImageCompression node.

Reduces the size of image files without significantly compromising their visual quality. Useful for optimizing storage and improving webpage load times.

InputType: image OutputType: image

TextClassificationInputs Objects

class TextClassificationInputs(Inputs)

[view_source]

Input parameters for TextClassification.

__init__

def __init__(node=None)

[view_source]

Initialize TextClassificationInputs.

TextClassificationOutputs Objects

class TextClassificationOutputs(Outputs)

[view_source]

Output parameters for TextClassification.

__init__

def __init__(node=None)

[view_source]

Initialize TextClassificationOutputs.

TextClassification Objects

class TextClassification(AssetNode[TextClassificationInputs,
TextClassificationOutputs])

[view_source]

TextClassification node.

Categorizes text into predefined groups or topics, facilitating content organization and targeted actions.

InputType: text OutputType: label

AsrAgeClassificationInputs Objects

class AsrAgeClassificationInputs(Inputs)

[view_source]

Input parameters for AsrAgeClassification.

__init__

def __init__(node=None)

[view_source]

Initialize AsrAgeClassificationInputs.

AsrAgeClassificationOutputs Objects

class AsrAgeClassificationOutputs(Outputs)

[view_source]

Output parameters for AsrAgeClassification.

__init__

def __init__(node=None)

[view_source]

Initialize AsrAgeClassificationOutputs.

AsrAgeClassification Objects

class AsrAgeClassification(AssetNode[AsrAgeClassificationInputs,
AsrAgeClassificationOutputs])

[view_source]

AsrAgeClassification node.

The ASR Age Classification function is designed to analyze audio recordings of speech to determine the speaker's age group by leveraging automatic speech recognition (ASR) technology and machine learning algorithms.

InputType: audio OutputType: label

AsrQualityEstimationInputs Objects

class AsrQualityEstimationInputs(Inputs)

[view_source]

Input parameters for AsrQualityEstimation.

__init__

def __init__(node=None)

[view_source]

Initialize AsrQualityEstimationInputs.

AsrQualityEstimationOutputs Objects

class AsrQualityEstimationOutputs(Outputs)

[view_source]

Output parameters for AsrQualityEstimation.

__init__

def __init__(node=None)

[view_source]

Initialize AsrQualityEstimationOutputs.

AsrQualityEstimation Objects

class AsrQualityEstimation(AssetNode[AsrQualityEstimationInputs,
AsrQualityEstimationOutputs])

[view_source]

AsrQualityEstimation node.

ASR Quality Estimation is a process that evaluates the accuracy and reliability of automatic speech recognition systems by analyzing their performance in transcribing spoken language into text.

InputType: text OutputType: label

Pipeline Objects

class Pipeline(DefaultPipeline)

[view_source]

Pipeline class for creating and managing AI processing pipelines.
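
The factory methods below each create one node of the corresponding type from an asset ID. A hypothetical sketch of the intended call pattern, in pseudocode — the `PipelineFactory` entry point, the wiring step, and the asset ID are assumptions for illustration, not taken from this page:

```
pipeline = PipelineFactory.init("asr-pipeline")    # assumed factory entry point
asr = pipeline.speech_recognition("<asset_id>")    # factory method documented below
# ... link input/output nodes, then save or run the pipeline
```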

text_normalization

def text_normalization(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> TextNormalization

[view_source]

Create a TextNormalization node.

Converts unstructured or non-standard textual data into a more readable and uniform format, dealing with abbreviations, numerals, and other non-standard words.

paraphrasing

def paraphrasing(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> Paraphrasing

[view_source]

Create a Paraphrasing node.

Express the meaning of the writer or speaker or something written or spoken using different words.

language_identification

def language_identification(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> LanguageIdentification

[view_source]

Create a LanguageIdentification node.

Detects the language in which a given text is written, aiding in multilingual platforms or content localization.

benchmark_scoring_asr

def benchmark_scoring_asr(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> BenchmarkScoringAsr

[view_source]

Create a BenchmarkScoringAsr node.

Benchmark Scoring ASR is a function that evaluates and compares the performance of automatic speech recognition systems by analyzing their accuracy, speed, and other relevant metrics against a standardized set of benchmarks.

multi_class_text_classification

def multi_class_text_classification(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> MultiClassTextClassification

[view_source]

Create a MultiClassTextClassification node.

Multi Class Text Classification is a natural language processing task that involves categorizing a given text into one of several predefined classes or categories based on its content.

speech_embedding

def speech_embedding(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> SpeechEmbedding

[view_source]

Create a SpeechEmbedding node.

Transforms spoken content into a fixed-size vector in a high-dimensional space that captures the content's essence. Facilitates tasks like speech recognition and speaker verification.

document_image_parsing

def document_image_parsing(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> DocumentImageParsing

[view_source]

Create a DocumentImageParsing node.

Document Image Parsing is the process of analyzing and converting scanned or photographed images of documents into structured, machine-readable formats by identifying and extracting text, layout, and other relevant information.

translation

def translation(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> Translation

[view_source]

Create a Translation node.

Converts text from one language to another while maintaining the original message's essence and context. Crucial for global communication.

audio_source_separation

def audio_source_separation(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> AudioSourceSeparation

[view_source]

Create an AudioSourceSeparation node.

Audio Source Separation is the process of separating a mixture (e.g. a pop band recording) into isolated sounds from individual sources (e.g. just the lead vocals).

speech_recognition

def speech_recognition(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> SpeechRecognition

[view_source]

Create a SpeechRecognition node.

Converts spoken language into written text. Useful for transcription services, voice assistants, and applications requiring voice-to-text capabilities.

keyword_spotting

def keyword_spotting(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> KeywordSpotting

[view_source]

Create a KeywordSpotting node.

Keyword Spotting is a function that enables the detection and identification of specific words or phrases within a stream of audio, often used in voice-activated systems to trigger actions or commands based on recognized keywords.

part_of_speech_tagging

def part_of_speech_tagging(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> PartOfSpeechTagging

[view_source]

Create a PartOfSpeechTagging node.

Part of Speech Tagging is a natural language processing task that involves assigning each word in a sentence its corresponding part of speech, such as noun, verb, adjective, or adverb, based on its role and context within the sentence.

referenceless_audio_generation_metric

def referenceless_audio_generation_metric(
asset_id: Union[str, asset.Asset], *args,
**kwargs) -> ReferencelessAudioGenerationMetric

[view_source]

Create a ReferencelessAudioGenerationMetric node.

The Referenceless Audio Generation Metric is a tool designed to evaluate the quality of generated audio content without the need for a reference or original audio sample for comparison.

voice_activity_detection

def voice_activity_detection(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> VoiceActivityDetection

[view_source]

Create a VoiceActivityDetection node.

Determines when a person is speaking in an audio clip. It's an essential preprocessing step for other audio-related tasks.

sentiment_analysis

def sentiment_analysis(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> SentimentAnalysis

[view_source]

Create a SentimentAnalysis node.

Determines the sentiment or emotion (e.g., positive, negative, neutral) of a piece of text, aiding in understanding user feedback or market sentiment.

subtitling

def subtitling(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> Subtitling

[view_source]

Create a Subtitling node.

Generates accurate subtitles for videos, enhancing accessibility for diverse audiences.

multi_label_text_classification

def multi_label_text_classification(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> MultiLabelTextClassification

[view_source]

Create a MultiLabelTextClassification node.

Multi Label Text Classification is a natural language processing task where a given text is analyzed and assigned multiple relevant labels or categories from a predefined set, allowing for the text to belong to more than one category simultaneously.

viseme_generation

def viseme_generation(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> VisemeGeneration

[view_source]

Create a VisemeGeneration node.

Viseme Generation is the process of creating visual representations of phonemes, which are the distinct units of sound in speech, to synchronize lip movements with spoken words in animations or virtual avatars.

text_segmenation

def text_segmenation(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> TextSegmenation

[view_source]

Create a TextSegmenation node.

Text Segmentation is the process of dividing a continuous text into meaningful units, such as words, sentences, or topics, to facilitate easier analysis and understanding.

zero_shot_classification

def zero_shot_classification(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> ZeroShotClassification

[view_source]

Create a ZeroShotClassification node.

Zero-shot classification assigns text to candidate labels the model was not explicitly trained on, using the label names or descriptions supplied at inference time.

text_generation

def text_generation(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> TextGeneration

[view_source]

Create a TextGeneration node.

Creates coherent and contextually relevant textual content based on prompts or certain parameters. Useful for chatbots, content creation, and data augmentation.

audio_intent_detection

def audio_intent_detection(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> AudioIntentDetection

[view_source]

Create an AudioIntentDetection node.

Audio Intent Detection is a process that involves analyzing audio signals to identify and interpret the underlying intentions or purposes behind spoken words, enabling systems to understand and respond appropriately to human speech.

entity_linking

def entity_linking(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> EntityLinking

[view_source]

Create an EntityLinking node.

Associates identified entities in the text with specific entries in a knowledge base or database.

connection

def connection(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> Connection

[view_source]

Create a Connection node.

Connections are integrations that allow you to connect your AI agents to external tools.

visual_question_answering

def visual_question_answering(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> VisualQuestionAnswering

[view_source]

Create a VisualQuestionAnswering node.

Visual Question Answering (VQA) is a task in artificial intelligence that involves analyzing an image and providing accurate, contextually relevant answers to questions posed about the visual content of that image.

loglikelihood

def loglikelihood(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> Loglikelihood

[view_source]

Create a Loglikelihood node.

The Log Likelihood function measures the probability of observing the given data under a specific statistical model by taking the natural logarithm of the likelihood function, thereby transforming the product of probabilities into a sum, which simplifies the process of optimization and parameter estimation.
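
The "product of probabilities into a sum" identity can be checked directly in plain Python (illustrative numbers only; this is not the node's implementation):

```python
import math

# Probabilities a model assigns to three independent observations.
probs = [0.5, 0.25, 0.125]

# Log of the product of probabilities...
log_likelihood = math.log(math.prod(probs))

# ...equals the sum of the individual log-probabilities.
sum_of_logs = sum(math.log(p) for p in probs)
assert math.isclose(log_likelihood, sum_of_logs)
```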

language_identification_audio

def language_identification_audio(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> LanguageIdentificationAudio

[view_source]

Create a LanguageIdentificationAudio node.

The Language Identification Audio function analyzes audio input to determine and identify the language being spoken.

fact_checking

def fact_checking(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> FactChecking

[view_source]

Create a FactChecking node.

Fact Checking is the process of verifying the accuracy and truthfulness of information, statements, or claims by cross-referencing with reliable sources and evidence.

table_question_answering

def table_question_answering(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> TableQuestionAnswering

[view_source]

Create a TableQuestionAnswering node.

The task of question answering over tables: given an input table (or a set of tables) T and a natural language question Q (a user query), output the correct answer A.

speech_classification

def speech_classification(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> SpeechClassification

[view_source]

Create a SpeechClassification node.

Categorizes audio clips based on their content, aiding in content organization and targeted actions.

inverse_text_normalization

def inverse_text_normalization(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> InverseTextNormalization

[view_source]

Create an InverseTextNormalization node.

Inverse Text Normalization is the process of converting spoken or written language from its normalized form (such as spelled-out numbers, dates, and abbreviations) back into its original, more detailed textual representation.

multi_class_image_classification

def multi_class_image_classification(
asset_id: Union[str, asset.Asset], *args,
**kwargs) -> MultiClassImageClassification

[view_source]

Create a MultiClassImageClassification node.

Multi Class Image Classification is a machine learning task where an algorithm is trained to categorize images into one of several predefined classes or categories based on their visual content.

asr_gender_classification

def asr_gender_classification(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> AsrGenderClassification

[view_source]

Create an AsrGenderClassification node.

The ASR Gender Classification function analyzes audio recordings to determine and classify the speaker's gender based on their voice characteristics.

summarization

def summarization(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> Summarization

[view_source]

Create a Summarization node.

Text summarization is the process of distilling the most important information from a source (or sources) to produce an abridged version for a particular user (or users) and task (or tasks).

topic_modeling

def topic_modeling(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> TopicModeling

[view_source]

Create a TopicModeling node.

Topic modeling is a type of statistical modeling for discovering the abstract “topics” that occur in a collection of documents.

audio_reconstruction

def audio_reconstruction(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> AudioReconstruction

[view_source]

Create an AudioReconstruction node.

Audio Reconstruction is the process of restoring or recreating audio signals from incomplete, damaged, or degraded recordings to achieve a high-quality, accurate representation of the original sound.

text_embedding

def text_embedding(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> TextEmbedding

[view_source]

Create a TextEmbedding node.

Text embedding is a process that converts text into numerical vectors, capturing the semantic meaning and contextual relationships of words or phrases, enabling machines to understand and analyze natural language more effectively.
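
What "capturing semantic meaning" buys you can be sketched with toy vectors in plain Python — the numbers below are made up for illustration, not real model output:

```python
import math

# Toy 3-dimensional "embeddings" (illustrative values only).
cat = [0.9, 0.1, 0.0]
kitten = [0.8, 0.2, 0.0]
car = [0.0, 0.1, 0.9]

def cosine(a, b):
    """Cosine similarity: dot product over the product of vector norms."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Semantically related texts should map to nearby vectors.
assert cosine(cat, kitten) > cosine(cat, car)
```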

detect_language_from_text

def detect_language_from_text(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> DetectLanguageFromText

[view_source]

Create a DetectLanguageFromText node.

Detects the language in which a given text is written.

extract_audio_from_video

def extract_audio_from_video(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> ExtractAudioFromVideo

[view_source]

Create an ExtractAudioFromVideo node.

Isolates and extracts audio tracks from video files, aiding in audio analysis or transcription tasks.

scene_detection

def scene_detection(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> SceneDetection

[view_source]

Create a SceneDetection node.

Scene detection is used for detecting transitions between shots in a video to split it into basic temporal segments.

text_to_image_generation

def text_to_image_generation(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> TextToImageGeneration

[view_source]

Create a TextToImageGeneration node.

Creates a visual representation based on textual input, turning descriptions into pictorial forms. Used in creative processes and content generation.

auto_mask_generation

def auto_mask_generation(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> AutoMaskGeneration

[view_source]

Create an AutoMaskGeneration node.

Auto-mask generation refers to the automated process of creating masks in image processing or computer vision, typically for segmentation tasks. A mask is a binary or multi-class image that labels different parts of an image, usually separating the foreground (objects of interest) from the background, or identifying specific object classes in an image.

audio_language_identification

def audio_language_identification(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> AudioLanguageIdentification

[view_source]

Create an AudioLanguageIdentification node.

Audio Language Identification is a process that involves analyzing an audio recording to determine the language being spoken.

facial_recognition

def facial_recognition(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> FacialRecognition

[view_source]

Create a FacialRecognition node.

A facial recognition system is a technology capable of matching a human face from a digital image or a video frame against a database of faces.

question_answering

def question_answering(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> QuestionAnswering

[view_source]

Create a QuestionAnswering node.

Builds systems that automatically answer questions posed by humans in natural language, usually from a given text.

image_impainting

def image_impainting(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> ImageImpainting

[view_source]

Create an ImageImpainting node.

Image inpainting is a process that involves filling in missing or damaged parts of an image in a way that is visually coherent and seamlessly blends with the surrounding areas, often using advanced algorithms and techniques to restore the image to its original or intended appearance.

text_reconstruction

def text_reconstruction(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> TextReconstruction

[view_source]

Create a TextReconstruction node.

Text Reconstruction is a process that involves piecing together fragmented or incomplete text data to restore it to its original, coherent form.

script_execution

def script_execution(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> ScriptExecution

[view_source]

Create a ScriptExecution node.

Script Execution refers to the process of running a set of programmed instructions or code within a computing environment, enabling the automated performance of tasks, calculations, or operations as defined by the script.

semantic_segmentation

def semantic_segmentation(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> SemanticSegmentation

[view_source]

Create a SemanticSegmentation node.

Semantic segmentation is a computer vision process that involves classifying each pixel in an image into a predefined category, effectively partitioning the image into meaningful segments based on the objects or regions they represent.

audio_emotion_detection

def audio_emotion_detection(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> AudioEmotionDetection

[view_source]

Create an AudioEmotionDetection node.

Audio Emotion Detection is a technology that analyzes vocal characteristics and patterns in audio recordings to identify and classify the emotional state of the speaker.

image_captioning

def image_captioning(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> ImageCaptioning

[view_source]

Create an ImageCaptioning node.

Image Captioning is a process that involves generating a textual description of an image, typically using machine learning models to analyze the visual content and produce coherent and contextually relevant sentences that describe the objects, actions, and scenes depicted in the image.

split_on_linebreak

def split_on_linebreak(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> SplitOnLinebreak

[view_source]

Create a SplitOnLinebreak node.

The "Split On Linebreak" function divides a given string into a list of substrings, using linebreaks (newline characters) as the points of separation.
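
The underlying operation is ordinary string splitting; a plain-Python illustration (not the node's actual implementation):

```python
text = "first line\nsecond line\nthird line"

# Split on newline characters, one substring per line.
segments = text.split("\n")
assert segments == ["first line", "second line", "third line"]
```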

style_transfer

def style_transfer(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> StyleTransfer

[view_source]

Create a StyleTransfer node.

Style Transfer is a technique in artificial intelligence that applies the visual style of one image (such as the brushstrokes of a famous painting) to the content of another image, effectively blending the artistic elements of the first image with the subject matter of the second.

base_model

def base_model(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> BaseModel

[view_source]

Create a BaseModel node.

The Base-Model function serves as a foundational framework designed to provide essential features and capabilities upon which more specialized or advanced models can be built and customized.

image_manipulation

def image_manipulation(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> ImageManipulation

[view_source]

Create an ImageManipulation node.

Image Manipulation refers to the process of altering or enhancing digital images using various techniques and tools to achieve desired visual effects, correct imperfections, or transform the image's appearance.

video_embedding

def video_embedding(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> VideoEmbedding

[view_source]

Create a VideoEmbedding node.

Video Embedding is a process that transforms video content into a fixed-dimensional vector representation, capturing essential features and patterns to facilitate tasks such as retrieval, classification, and recommendation.

dialect_detection

def dialect_detection(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> DialectDetection

[view_source]

Create a DialectDetection node.

Identifies specific dialects within a language, aiding in localized content creation or user experience personalization.

fill_text_mask

def fill_text_mask(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> FillTextMask

[view_source]

Create a FillTextMask node.

Completes missing parts of a text based on the context, ideal for content generation or data augmentation tasks.

activity_detection

def activity_detection(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> ActivityDetection

[view_source]

Create an ActivityDetection node.

Detects the presence or absence of human speech; used in speech processing.

select_supplier_for_translation

def select_supplier_for_translation(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> SelectSupplierForTranslation

[view_source]

Create a SelectSupplierForTranslation node.

Selects a supplier for a translation task.

expression_detection

def expression_detection(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> ExpressionDetection

[view_source]

Create an ExpressionDetection node.

Expression Detection is the process of identifying and analyzing facial expressions to interpret emotions or intentions using AI and computer vision techniques.

video_generation

def video_generation(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> VideoGeneration

[view_source]

Create a VideoGeneration node.

Produces video content based on specific inputs or datasets. Can be used for simulations, animations, or even deepfake detection.

image_analysis

def image_analysis(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> ImageAnalysis

[view_source]

Create an ImageAnalysis node.

Image analysis is the extraction of meaningful information from images.

noise_removal

def noise_removal(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> NoiseRemoval

[view_source]

Create a NoiseRemoval node.

Noise Removal is a process that involves identifying and eliminating unwanted random variations or disturbances from an audio signal to enhance the clarity and quality of the underlying information.

image_and_video_analysis

def image_and_video_analysis(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> ImageAndVideoAnalysis

[view_source]

Create an ImageAndVideoAnalysis node.

keyword_extraction

def keyword_extraction(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> KeywordExtraction

[view_source]

Create a KeywordExtraction node.

Condenses text and extracts relevant keywords. Example use cases include finding topics of interest in a news article and identifying problems from customer reviews.

split_on_silence

def split_on_silence(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> SplitOnSilence

[view_source]

Create a SplitOnSilence node.

The "Split On Silence" function divides an audio recording into separate segments based on periods of silence, allowing for easier editing and analysis of individual sections.
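
The idea can be illustrated on a toy amplitude sequence in plain Python, treating zero amplitude as silence (real systems apply energy thresholds to actual audio, not this simplification):

```python
# Toy amplitude envelope: zeros mark silence between "speech" bursts.
samples = [3, 4, 0, 0, 0, 5, 6, 2, 0, 0, 7]

segments, current = [], []
for s in samples:
    if s == 0:            # silence: close the current segment, if any
        if current:
            segments.append(current)
            current = []
    else:                 # non-silence: extend the current segment
        current.append(s)
if current:               # flush the trailing segment
    segments.append(current)

assert segments == [[3, 4], [5, 6, 2], [7]]
```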

intent_recognition

def intent_recognition(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> IntentRecognition

[view_source]

Create an IntentRecognition node.

Classifies the user's utterance (provided in varied natural language) or text into one of several predefined classes, that is, intents.

depth_estimation

def depth_estimation(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> DepthEstimation

[view_source]

Create a DepthEstimation node.

Depth estimation is a computational process that determines the distance of objects from a viewpoint, typically using visual data from cameras or sensors to create a three-dimensional understanding of a scene.

connector

def connector(asset_id: Union[str, asset.Asset], *args, **kwargs) -> Connector

[view_source]

Create a Connector node.

Connectors are integrations that allow you to connect your AI agents to external tools.

speaker_recognition

def speaker_recognition(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> SpeakerRecognition

[view_source]

Create a SpeakerRecognition node.

In speaker identification, an utterance from an unknown speaker is analyzed and compared with speech models of known speakers.

syntax_analysis

def syntax_analysis(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> SyntaxAnalysis

[view_source]

Create a SyntaxAnalysis node.

Syntax analysis is the process of analyzing natural language with the rules of a formal grammar. Grammatical rules are applied to categories and groups of words, not individual words. Syntactic analysis assigns a syntactic structure to text.

entity_sentiment_analysis

def entity_sentiment_analysis(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> EntitySentimentAnalysis

[view_source]

Create an EntitySentimentAnalysis node.

Entity Sentiment Analysis combines both entity analysis and sentiment analysis and attempts to determine the sentiment (positive or negative) expressed about entities within the text.

classification_metric

def classification_metric(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> ClassificationMetric

[view_source]

Create a ClassificationMetric node.

A Classification Metric is a quantitative measure used to evaluate the quality and effectiveness of classification models.

text_detection

def text_detection(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> TextDetection

[view_source]

Create a TextDetection node.

Detects text regions in complex backgrounds and labels them with bounding boxes.

guardrails

def guardrails(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> Guardrails

[view_source]

Create a Guardrails node.

Guardrails are governance rules that enforce security, compliance, and operational best practices, helping prevent mistakes and detect suspicious activity.

emotion_detection

def emotion_detection(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> EmotionDetection

[view_source]

Create an EmotionDetection node.

Identifies human emotions from text or audio, enhancing user experience in chatbots or customer feedback analysis.

video_forced_alignment

def video_forced_alignment(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> VideoForcedAlignment

[view_source]

Create a VideoForcedAlignment node.

Aligns the transcription of spoken content in a video with its corresponding timecodes, facilitating subtitle creation.

image_content_moderation

def image_content_moderation(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> ImageContentModeration

[view_source]

Create an ImageContentModeration node.

Detects and filters out inappropriate or harmful images, essential for platforms with user-generated visual content.

text_summarization

def text_summarization(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> TextSummarization

[view_source]

Create a TextSummarization node.

Extracts the main points from a larger body of text, producing a concise summary without losing the primary message.
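
As a rough illustration of the "extract the main points" idea, the sketch below scores sentences by the average corpus frequency of their words and keeps the top one. Real summarizers (including whatever model backs this node) are far more sophisticated; this is a toy.

```python
import re
from collections import Counter

def summarize(text, n_sentences=1):
    """Naive extractive summarizer: keep the highest-scoring sentences."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    freq = Counter(re.findall(r"\w+", text.lower()))
    def score(sentence):
        words = re.findall(r"\w+", sentence.lower())
        return sum(freq[w] for w in words) / max(len(words), 1)
    top = sorted(sentences, key=score, reverse=True)[:n_sentences]
    return ". ".join(top)

text = "Pipelines chain models together. Pipelines are flexible. The weather was nice."
print(summarize(text))  # → 'Pipelines are flexible'
```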

image_to_video_generation

def image_to_video_generation(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> ImageToVideoGeneration

[view_source]

Create an ImageToVideoGeneration node.

The Image To Video Generation function transforms a series of static images into a cohesive, dynamic video sequence, often incorporating transitions, effects, and synchronization with audio to create a visually engaging narrative.

video_understanding

def video_understanding(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> VideoUnderstanding

[view_source]

Create a VideoUnderstanding node.

Video Understanding is the process of analyzing and interpreting video content to extract meaningful information, such as identifying objects, actions, events, and contextual relationships within the footage.

text_generation_metric_default

def text_generation_metric_default(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> TextGenerationMetricDefault

[view_source]

Create a TextGenerationMetricDefault node.

The "Text Generation Metric Default" function provides a standard set of evaluation metrics for assessing the quality and performance of text generation models.

text_to_video_generation

def text_to_video_generation(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> TextToVideoGeneration

[view_source]

Create a TextToVideoGeneration node.

Text To Video Generation is a process that converts written descriptions or scripts into dynamic, visual video content using advanced algorithms and artificial intelligence.

video_label_detection

def video_label_detection(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> VideoLabelDetection

[view_source]

Create a VideoLabelDetection node.

Identifies and tags objects, scenes, or activities within a video. Useful for content indexing and recommendation systems.

text_spam_detection

def text_spam_detection(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> TextSpamDetection

[view_source]

Create a TextSpamDetection node.

Identifies and filters out unwanted or irrelevant text content, ideal for moderating user-generated content or ensuring quality in communication platforms.

text_content_moderation

def text_content_moderation(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> TextContentModeration

[view_source]

Create a TextContentModeration node.

Scans and identifies potentially harmful, offensive, or inappropriate textual content, ensuring safer user environments.

audio_transcript_improvement

def audio_transcript_improvement(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> AudioTranscriptImprovement

[view_source]

Create an AudioTranscriptImprovement node.

Refines and corrects transcriptions generated from audio data, improving readability and accuracy.

audio_transcript_analysis

def audio_transcript_analysis(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> AudioTranscriptAnalysis

[view_source]

Create an AudioTranscriptAnalysis node.

Analyzes transcribed audio data for insights, patterns, or specific information extraction.

speech_non_speech_classification

def speech_non_speech_classification(
asset_id: Union[str, asset.Asset], *args,
**kwargs) -> SpeechNonSpeechClassification

[view_source]

Create a SpeechNonSpeechClassification node.

Differentiates between speech and non-speech audio segments. Great for editing software and transcription services to exclude irrelevant audio.
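
A toy version of frame-level speech/non-speech segmentation: label each audio frame by whether its mean energy exceeds a threshold. Real systems use learned models; this only illustrates what "differentiating segments" means, and the frames and threshold are made-up values.

```python
def classify_frames(frames, threshold=0.1):
    """frames: lists of samples in [-1, 1]; returns one label per frame."""
    labels = []
    for frame in frames:
        energy = sum(s * s for s in frame) / len(frame)  # mean squared amplitude
        labels.append("speech" if energy > threshold else "non-speech")
    return labels

frames = [[0.0, 0.01, -0.01], [0.5, -0.6, 0.4]]
print(classify_frames(frames))  # ['non-speech', 'speech']
```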

audio_generation_metric

def audio_generation_metric(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> AudioGenerationMetric

[view_source]

Create an AudioGenerationMetric node.

The Audio Generation Metric is a quantitative measure used to evaluate the quality, accuracy, and overall performance of audio generated by artificial intelligence systems, often considering factors such as fidelity, intelligibility, and similarity to human-produced audio.

named_entity_recognition

def named_entity_recognition(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> NamedEntityRecognition

[view_source]

Create a NamedEntityRecognition node.

Identifies and classifies named entities (e.g., persons, organizations, locations) within text. Useful for information extraction, content tagging, and search enhancements.

speech_synthesis

def speech_synthesis(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> SpeechSynthesis

[view_source]

Create a SpeechSynthesis node.

Generates human-like speech from written text. Ideal for text-to-speech applications, audiobooks, and voice assistants.

document_information_extraction

def document_information_extraction(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> DocumentInformationExtraction

[view_source]

Create a DocumentInformationExtraction node.

Document Information Extraction is the process of automatically identifying, extracting, and structuring relevant data from unstructured or semi-structured documents, such as invoices, receipts, contracts, and forms, to facilitate easier data management and analysis.

ocr

def ocr(asset_id: Union[str, asset.Asset], *args, **kwargs) -> Ocr

[view_source]

Create an Ocr node.

Converts images of typed, handwritten, or printed text into machine-encoded text. Used in digitizing printed texts for data retrieval.

subtitling_translation

def subtitling_translation(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> SubtitlingTranslation

[view_source]

Create a SubtitlingTranslation node.

Converts the text of subtitles from one language to another, ensuring context and cultural nuances are maintained. Essential for global content distribution.

text_to_audio

def text_to_audio(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> TextToAudio

[view_source]

Create a TextToAudio node.

The Text to Audio function converts written text into spoken words, allowing users to listen to the content instead of reading it.

multilingual_speech_recognition

def multilingual_speech_recognition(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> MultilingualSpeechRecognition

[view_source]

Create a MultilingualSpeechRecognition node.

Multilingual Speech Recognition is a technology that enables the automatic transcription of spoken language into text across multiple languages, allowing for seamless communication and understanding in diverse linguistic contexts.

offensive_language_identification

def offensive_language_identification(
asset_id: Union[str, asset.Asset], *args,
**kwargs) -> OffensiveLanguageIdentification

[view_source]

Create an OffensiveLanguageIdentification node.

Detects language or phrases that might be considered offensive, aiding in content moderation and creating respectful user interactions.

benchmark_scoring_mt

def benchmark_scoring_mt(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> BenchmarkScoringMt

[view_source]

Create a BenchmarkScoringMt node.

Benchmark Scoring MT is a function designed to evaluate and score machine translation systems by comparing their output against a set of predefined benchmarks, thereby assessing their accuracy and performance.

speaker_diarization_audio

def speaker_diarization_audio(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> SpeakerDiarizationAudio

[view_source]

Create a SpeakerDiarizationAudio node.

Identifies individual speakers and their respective speech segments within an audio clip. Ideal for multi-speaker recordings or conference calls.

voice_cloning

def voice_cloning(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> VoiceCloning

[view_source]

Create a VoiceCloning node.

Replicates a person's voice based on a sample, allowing for the generation of speech in that person's tone and style. Used cautiously due to ethical considerations.

search

def search(asset_id: Union[str, asset.Asset], *args, **kwargs) -> Search

[view_source]

Create a Search node.

An algorithm that identifies and returns data or items that match particular keywords or conditions from a dataset. A fundamental tool for databases and websites.
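
The "match particular keywords or conditions" idea can be sketched as a simple substring filter: return every document that contains all query terms. This is only a conceptual illustration, not how the aixplain Search node is implemented.

```python
def keyword_search(query, documents):
    """Return documents containing every query term (substring match)."""
    terms = query.lower().split()
    return [doc for doc in documents
            if all(term in doc.lower() for term in terms)]

docs = [
    "Machine translation of spoken language",
    "Speech recognition for call centers",
    "Translation memory for subtitles",
]
print(keyword_search("translation", docs))  # the two documents mentioning translation
```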

object_detection

def object_detection(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> ObjectDetection

[view_source]

Create an ObjectDetection node.

Object Detection is a computer vision technology that identifies and locates objects within an image, typically by drawing bounding boxes around the detected objects and classifying them into predefined categories.

diacritization

def diacritization(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> Diacritization

[view_source]

Create a Diacritization node.

Adds diacritical marks to text, essential for languages where meaning can change based on diacritics.

speaker_diarization_video

def speaker_diarization_video(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> SpeakerDiarizationVideo

[view_source]

Create a SpeakerDiarizationVideo node.

Segments a video based on different speakers, identifying when each individual speaks. Useful for transcriptions and understanding multi-person conversations.

audio_forced_alignment

def audio_forced_alignment(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> AudioForcedAlignment

[view_source]

Create an AudioForcedAlignment node.

Synchronizes phonetic and phonological text with the corresponding segments in an audio file. Useful in linguistic research and detailed transcription tasks.

token_classification

def token_classification(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> TokenClassification

[view_source]

Create a TokenClassification node.

Token-level classification means that each token will be given a label, for example a part-of-speech tagger will classify each word as one particular part of speech.
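
The one-label-per-token idea can be shown with a tiny lexicon-based "tagger". The lexicon and the fallback label `X` are made up for illustration; a real part-of-speech tagger is a trained model.

```python
# Toy lexicon mapping tokens to part-of-speech labels (illustrative only).
LEXICON = {"the": "DET", "cat": "NOUN", "sat": "VERB", "on": "ADP", "mat": "NOUN"}

def tag(tokens):
    """Return (token, label) pairs; unknown tokens get the label 'X'."""
    return [(tok, LEXICON.get(tok, "X")) for tok in tokens]

print(tag("the cat sat on the mat".split()))
# [('the', 'DET'), ('cat', 'NOUN'), ('sat', 'VERB'), ('on', 'ADP'), ('the', 'DET'), ('mat', 'NOUN')]
```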

topic_classification

def topic_classification(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> TopicClassification

[view_source]

Create a TopicClassification node.

Assigns categories or topics to a piece of text based on its content, facilitating content organization and retrieval.

intent_classification

def intent_classification(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> IntentClassification

[view_source]

Create an IntentClassification node.

Intent Classification is a natural language processing task that involves analyzing and categorizing user text input to determine the underlying purpose or goal behind the communication, such as booking a flight, asking for weather information, or setting a reminder.

video_content_moderation

def video_content_moderation(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> VideoContentModeration

[view_source]

Create a VideoContentModeration node.

Automatically reviews video content to detect and possibly remove inappropriate or harmful material. Essential for user-generated content platforms.

text_generation_metric

def text_generation_metric(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> TextGenerationMetric

[view_source]

Create a TextGenerationMetric node.

A Text Generation Metric is a quantitative measure used to evaluate the quality and effectiveness of text produced by natural language processing models, often assessing aspects such as coherence, relevance, fluency, and adherence to given prompts or instructions.
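
One of the simplest reference-based text generation metrics is unigram precision: the fraction of generated words that also appear in the reference. Production metrics such as BLEU add n-grams, count clipping, and a brevity penalty; this sketch is illustrative only.

```python
def unigram_precision(candidate, reference):
    """Fraction of candidate words that appear in the reference text."""
    cand = candidate.lower().split()
    ref = set(reference.lower().split())
    if not cand:
        return 0.0
    return sum(w in ref for w in cand) / len(cand)

print(unigram_precision("the cat sat", "the cat sat on the mat"))  # 1.0
print(unigram_precision("a dog ran", "the cat sat on the mat"))    # 0.0
```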

image_embedding

def image_embedding(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> ImageEmbedding

[view_source]

Create an ImageEmbedding node.

Image Embedding is a process that transforms an image into a fixed-dimensional vector representation, capturing its essential features and enabling efficient comparison, retrieval, and analysis in various machine learning and computer vision tasks.
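
"Efficient comparison" of fixed-dimensional embeddings usually means cosine similarity between vectors. The 3-dimensional vectors below stand in for real image embeddings (which typically have hundreds of dimensions); the values are invented for illustration.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

emb_cat1 = [0.9, 0.1, 0.0]  # hypothetical embedding of a cat photo
emb_cat2 = [0.8, 0.2, 0.1]  # hypothetical embedding of another cat photo
emb_car = [0.0, 0.1, 0.9]   # hypothetical embedding of a car photo
print(cosine_similarity(emb_cat1, emb_cat2) > cosine_similarity(emb_cat1, emb_car))  # True
```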

image_label_detection

def image_label_detection(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> ImageLabelDetection

[view_source]

Create an ImageLabelDetection node.

Identifies objects, themes, or topics within images, useful for image categorization, search, and recommendation systems.

image_colorization

def image_colorization(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> ImageColorization

[view_source]

Create an ImageColorization node.

Image colorization is a process that involves adding color to grayscale images, transforming them from black-and-white to full-color representations, often using advanced algorithms and machine learning techniques to predict and apply the appropriate hues and shades.

metric_aggregation

def metric_aggregation(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> MetricAggregation

[view_source]

Create a MetricAggregation node.

Metric Aggregation is a function that computes and summarizes numerical data by applying statistical operations, such as averaging, summing, or finding the minimum and maximum values, to provide insights and facilitate analysis of large datasets.
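
A minimal sketch of the statistical operations mentioned above, applied to a list of per-sample scores. The score values and summary keys are made up; a real aggregation node may expose different statistics.

```python
import statistics

def aggregate(scores):
    """Summarize a list of per-sample metric scores."""
    return {
        "mean": statistics.mean(scores),
        "min": min(scores),
        "max": max(scores),
    }

print(aggregate([0.8, 0.9, 0.7, 1.0]))
```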

instance_segmentation

def instance_segmentation(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> InstanceSegmentation

[view_source]

Create an InstanceSegmentation node.

Instance segmentation is a computer vision task that involves detecting and delineating each distinct object within an image, assigning a unique label and precise boundary to every individual instance of objects, even if they belong to the same category.

other__multipurpose_

def other__multipurpose_(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> OtherMultipurpose

[view_source]

Create an OtherMultipurpose node.

The "Other (Multipurpose)" function serves as a versatile category designed to accommodate a wide range of tasks and activities that do not fit neatly into predefined classifications, offering flexibility and adaptability for various needs.

speech_translation

def speech_translation(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> SpeechTranslation

[view_source]

Create a SpeechTranslation node.

Speech Translation is a technology that converts spoken language in real-time from one language to another, enabling seamless communication between speakers of different languages.

referenceless_text_generation_metric_default

def referenceless_text_generation_metric_default(
asset_id: Union[str, asset.Asset], *args,
**kwargs) -> ReferencelessTextGenerationMetricDefault

[view_source]

Create a ReferencelessTextGenerationMetricDefault node.

The Referenceless Text Generation Metric Default is a function designed to evaluate the quality of generated text without relying on reference texts for comparison.

referenceless_text_generation_metric

def referenceless_text_generation_metric(
asset_id: Union[str, asset.Asset], *args,
**kwargs) -> ReferencelessTextGenerationMetric

[view_source]

Create a ReferencelessTextGenerationMetric node.

The Referenceless Text Generation Metric is a method for evaluating the quality of generated text without requiring a reference text for comparison, often leveraging models or algorithms to assess coherence, relevance, and fluency based on intrinsic properties of the text itself.

text_denormalization

def text_denormalization(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> TextDenormalization

[view_source]

Create a TextDenormalization node.

Converts standardized or normalized text into its original, often more readable, form. Useful in natural language generation tasks.
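
A crude sketch of the normalized-to-surface-form mapping: a lookup table replacing spelled-out forms with their written equivalents. A real denormalization system handles numbers, dates, abbreviations, and casing with proper models or grammars; the table here is invented for illustration.

```python
# Hypothetical spoken-form → written-form table (illustrative only).
DENORM = {"twenty five": "25", "doctor": "Dr.", "december": "December"}

def denormalize(text):
    """Replace each spelled-out form with its written surface form."""
    out = text
    for spoken, written in DENORM.items():
        out = out.replace(spoken, written)
    return out

print(denormalize("doctor smith arrives in december at twenty five"))
# → 'Dr. smith arrives in December at 25'
```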

image_compression

def image_compression(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> ImageCompression

[view_source]

Create an ImageCompression node.

Reduces the size of image files without significantly compromising their visual quality. Useful for optimizing storage and improving webpage load times.

text_classification

def text_classification(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> TextClassification

[view_source]

Create a TextClassification node.

Categorizes text into predefined groups or topics, facilitating content organization and targeted actions.

asr_age_classification

def asr_age_classification(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> AsrAgeClassification

[view_source]

Create an AsrAgeClassification node.

The ASR Age Classification function is designed to analyze audio recordings of speech to determine the speaker's age group by leveraging automatic speech recognition (ASR) technology and machine learning algorithms.

asr_quality_estimation

def asr_quality_estimation(asset_id: Union[str, asset.Asset], *args,
**kwargs) -> AsrQualityEstimation

[view_source]

Create an AsrQualityEstimation node.

ASR Quality Estimation is a process that evaluates the accuracy and reliability of automatic speech recognition systems by analyzing their performance in transcribing spoken language into text.
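
The most common reference-based measure of ASR accuracy is word error rate (WER): the word-level edit distance between hypothesis and reference, normalized by reference length. This is a standard textbook computation, shown here as a sketch; the aixplain node may combine it with other quality signals.

```python
def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,       # deletion
                          d[i][j - 1] + 1,       # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on a mat"))  # ≈ 0.167 (1 of 6 words wrong)
```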