Karini AI Documentation

Set up Agentic Recipe



Agent 2.0 recipes can be initiated with either a Chat node or a Webhook node, depending on the specific use case and the desired interaction flow.

Chat

This node is designed to initiate and facilitate user interactions, serving as the entry point for conversational engagement.

Conversation History:

Determine the number of messages to retain in the conversation history. This allows you to augment the prompt context with past conversation, improving the response quality.

Generate follow up questions:

Enable Generate follow-up questions to prompt the system to autonomously generate relevant questions based on the conversation context and the generated answer. To use this option, the Follow Up question generator model must be configured in the Organization settings.

You are provided with a sample prompt for this task; however, you can update the prompt as required.

Generate follow-up questions based on the given input question and answer. Follow these guidelines:

1. Number of Questions: Include 2-3 short follow-up questions that directly expand on or clarify the main question.
2. Relevance is Key: Ensure these questions are relevant, showing an understanding of the initial question and its broader implications.
3. Emphasis on Follow-Ups: Always aim to provide follow-up questions to foster deeper dialogue; use the given answer as a basis for further exploration.

Reference Question: {question}
Reference Answer: {answer}

Output Format:
Question-1: [provide followup question here]
Question-2: [provide followup question here]
Question-3: [provide followup question here]
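If a downstream component needs the individual questions, the fixed "Question-N:" output format above can be parsed mechanically. The following sketch is illustrative only (the function name and sample text are not part of the platform):

```python
import re

def parse_follow_ups(model_output: str) -> list[str]:
    """Extract follow-up questions from the 'Question-N: ...' output format."""
    pattern = re.compile(r"Question-\d+:\s*(.+)")
    return [m.group(1).strip() for m in pattern.finditer(model_output)]

sample = """Question-1: What models are supported?
Question-2: How is history stored?
Question-3: Can audio mode be disabled?"""

print(parse_follow_ups(sample))
# → ['What models are supported?', 'How is history stored?', 'Can audio mode be disabled?']
```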

Enable Audio Mode

When Audio mode is enabled, the audio option becomes available on Copilots. This allows users to interact through voice queries and receive responses in both text and audio formats, enhancing accessibility and user experience. To use this option, you must have a Speech to Text Model Endpoint and a Text to Speech Model Endpoint configured in the Organization settings.

To integrate the Chat node effectively, follow these configurations based on document handling requirements:

  • With Document Handling: Establish a connection between the Chat and Processing nodes when document uploading, processing, and querying are essential for interaction.

  • Without Document Handling: Directly link the Chat node to the Start node if the workflow does not require document-based references, enabling a standard conversational experience.

Webhook

A webhook is a user-defined HTTP callback that allows the system to send automated messages or data to an external URL when an event occurs.

Here are the key elements of a webhook node.

Label: Serves as a unique identifier for the webhook, making it easy to reference or manage.

Webhook URL: The endpoint to which the webhook sends HTTP requests. It directs the data to the appropriate destination.

Webhook Token: Used for authentication to ensure that the request made by the webhook is valid and authorized to access the API.

Query Method: Specifies the HTTP method (such as POST) used for the request to the API.

Webhook Query Headers: Defines the headers included in the request, often containing metadata like content type, authorization, or other necessary information for the API to process the request.

The webhook request must include the following headers for authentication and content type specification:

{
    "Content-Type": "application/json",
    "x-api-token": "token_value"
}

Payload Template: A predefined structure or format for the data sent with the request. It helps in organizing the information and ensuring that the API receives the correct data structure.

{
    "files": [
        {
            "content_type": "application/pdf",
            "file_name": "test1.pdf",
            "file_content": "(base64data)"
        },
        {
            "content_type": "application/pdf",
            "file_name": "test2.pdf",
            "file_path": "s3://bucket/path"
        },
        {
            "content_type": "text/plain",
            "file_name": "test.txt",
            "text": "this is a plain text"
        }
    ],
    "input_message": "",
    "metadata": {}
}
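As an illustration, a webhook request matching the headers and payload template above could be assembled as follows. The URL and token are placeholders, and sending the request is left commented out:

```python
import base64
import json
import urllib.request

# Placeholder endpoint and token -- replace with your recipe's Webhook URL and token.
WEBHOOK_URL = "https://example.com/webhook/agent-recipe"
API_TOKEN = "token_value"

# Build the payload following the documented template: one base64-encoded file
# plus an input message and optional metadata.
pdf_bytes = b"%PDF-1.4 minimal example"
payload = {
    "files": [
        {
            "content_type": "application/pdf",
            "file_name": "test1.pdf",
            "file_content": base64.b64encode(pdf_bytes).decode("ascii"),
        }
    ],
    "input_message": "Summarize the attached document.",
    "metadata": {"user_id": "u-123"},
}

request = urllib.request.Request(
    WEBHOOK_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "x-api-token": API_TOKEN},
    method="POST",
)
# urllib.request.urlopen(request)  # uncomment to actually send the request
```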

To ensure optimal functionality of the Webhook node, adhere to the following configuration guidelines:

  • With Document Processing: Connect the Webhook node to the Processing node when document uploading, processing, or querying is required before external data transmission.

Start

The Start node serves as the entry point of the workflow, initiating the flow of tasks by connecting to various functional nodes based on specific process requirements.

It can link to the Knowledge base node for information retrieval, the Router node for directing execution based on logic, the Prompt node for generating responses, the Custom function node for executing predefined tasks, the Agent node for intelligent automation, and the Transform node for enabling parallel processing. This flexibility allows workflows to be dynamically structured according to operational needs, ensuring efficient execution and automation.

Knowledge Base

A Knowledge base is a systematically structured repository designed to store, organize, and manage information, enabling applications to retrieve relevant data efficiently.

The system supports the following two types of knowledge bases:

  • Native Knowledge Base: Choose a dataset from the available dataset list. After selecting the dataset, the relevant prompt contexts can be configured to retrieve information from the vector store. For detailed guidance, refer to Context Generation using Vector Search.

  • Bedrock Knowledge Base: An AI-powered retrieval system offered by Amazon Bedrock, a service by AWS (Amazon Web Services). It allows users to integrate enterprise knowledge bases with AI-powered applications, enabling natural language queries on stored data. For more details, refer to Amazon Bedrock Knowledge Bases.

The configuration fields include:

  • Knowledge Base ID: Enter the ID of the knowledge base.

  • Filter Key: Specifies the criteria for filtering results.

  • Filter Value: Defines the specific value to filter by.

  • Number of Results: Specifies how many responses should be retrieved.

The following state flags need to be configured based on the use case.

State settings

State settings control data access within the workflow, ensuring that nodes can retrieve and process relevant information.

  1. Document Cache: Provides access to shared document information across nodes.

    1. Retrieve Documents: Fetches the entire document based on a specified filename or file path. Useful for accessing full document content from the knowledge base.

    2. Retrieve Chunks: Fetches specific document chunks based on semantic similarity, ideal for retrieving only relevant parts of a document related to the query.

    3. Ephemeral: Passes context as raw text directly with the query without storing it in the knowledge base.

  2. Messages: Accesses the conversation history for processing, with options for full, last, or specific node messages.

    1. All Messages: Accesses the entire conversation history, allowing the node to consider all previous interactions for context.

    2. Last Message: Accesses only the most recent message in the conversation, useful for nodes that need to respond to the latest input.

    3. Node Message: Accesses messages from a specific node in the workflow, ideal for retrieving targeted information shared by a particular node.

  3. Metadata: Provides access to webhook metadata from connected APIs, enabling external data flow.

  4. Scratchpad: A Scratchpad in agents refers to a temporary storage area where an AI agent keeps track of its intermediate thoughts, steps, or actions while processing a task. It helps the agent plan, reason, and track progress when solving complex problems, especially in multi-step reasoning or decision-making scenarios.

The Knowledge Base can be connected to various functional nodes based on workflow requirements. It can be linked to the Processing, Router, Prompt, Custom Function, Agent, End, or Transform node.

Connector

The Connector node functions as an interface facilitating data exchange between the system and external storage solutions.

There are two available connector types, as listed below.

  • Amazon S3: Enables integration with Amazon Simple Storage Service (S3) for retrieving or storing data.

  • In Memory Base64 Data: Supports handling Base64-encoded data stored in memory for temporary or intermediate processing.

The Connector node can be linked to either the Start node or the Processing node, depending on the workflow requirements.

Processing

The Processing node is employed in the agent recipe to enable file uploads to Copilot for querying. It streamlines the processing of uploaded files by providing configurable options designed to support specific data extraction and privacy requirements.

Karini AI recipes support the following preprocessing options:

Enable Transcriptions: Enables automatic transcription, facilitating the conversion of audio or speech-based files into text.

  • Default: Uses the OpenAI Whisper model, which must be selected as the Speech-to-Text Model Endpoint on the Organization page.

  • Amazon Transcribe: An automatic speech recognition service that uses machine learning models to convert audio to text.

OCR Options: This option provides various methods for extracting text from documents and images:

  • Unstructured IO with Extract Images: This method is used for extracting images from unstructured data sources. It processes unstructured documents, identifying and extracting images that can be further analyzed or used in different applications.

  • PyMuPDF with Fallback to Amazon Textract: This approach utilizes PyMuPDF to extract text and images from PDF documents. If PyMuPDF fails or is insufficient, the process falls back to Amazon Textract, ensuring a comprehensive extraction by leveraging Amazon's advanced OCR capabilities.

  • Amazon Textract: A cloud-based service that identifies and extracts text, structured data, and elements like tables and forms from documents.

    • Extract Layouts:

      • This option helps recognize the structural layout of a document, such as:

        • Headings

        • Paragraphs

        • Columns

      • Useful for document formatting retention.

    • Extract Tables:

      • This option allows structured table extraction, preserving row and column relationships.

      • Useful for processing invoices, reports, and tabular data.

    • Extract Forms:

      • This setting extracts key-value pairs from documents, such as:

        • Form fields and their corresponding values.

      • Useful for processing application forms, contracts, and structured documents.

  • Tesseract: An open-source OCR engine for extracting text from images and PDFs.

  • VLM: A specialized method for processing and extracting text from images or documents using visual language models. You must have a VLM configured in the Organization settings.

    • The VLM Prompt provides a predefined instruction set guiding the AI model on how to analyze the image and what details to extract.

    • It instructs the system to analyze an image in depth, extracting all visible text while preserving structure and order. Additionally, it provides a detailed description of diagrams, graphs, or scenes, explaining components, relationships, and inferred meanings to ensure a comprehensive textual representation of the image.

    The VLM prompt is defined as follows:

Instruction: Please analyze the attached image thoroughly and provide a detailed textual report that includes the following:

1. Extracted Text:

   - Extract all text present in the image.
   - This includes any visible text in labels, signs, titles, annotations, captions, legends, or embedded within diagrams, flowcharts, graphs, or any other elements.
   - Preserve the order and structure as it appears in the image.

2. Detailed Description:

   - Describe everything that is happening or depicted in the image.
   - For flowcharts or diagrams:
     - Explain each component, including shapes (e.g., rectangles, diamonds), connectors (e.g., arrows, lines), and how they relate to each other.
     - Describe the process flow, decision points, inputs, outputs, and any loops or cycles.
   - For graphs or charts:
     - Identify the type of graph (e.g., bar chart, line graph, pie chart).
     - Describe the axes, labels, data points, trends, and any significant patterns or anomalies.
   - For scenes or images with objects:
     - Describe all objects, people, animals, and elements present.
     - Include details about their appearance, positions, actions, expressions, and interactions.
   - Mention any colors, shapes, and sizes that are relevant to understanding the image.
   - Describe the background and setting to provide context.

3. Interpretation and Contextual Information:

   - Provide any inferred meanings, implications, or conclusions that can be drawn from the image.
   - Explain the purpose or function of the diagram, flowchart, or scene if apparent.
   - If the image represents a concept, process, or data, elaborate on what it signifies.

Formatting Guidelines:

- Organize your response into clear sections with headings: "Extracted Text," "Detailed Description," and "Interpretation."
- Use bullet points or numbered lists where appropriate for clarity.
- Ensure that the description is comprehensive and allows someone to fully understand the image without seeing it.

PII Masking Options: To mask Personally Identifiable Information (PII) within your dataset, enable the PII Masking option. You can specify the entities to be masked by selecting from the available list of PII entities, ensuring secure data preprocessing.

Connect the Processing node to the Start node to initiate workflow execution.
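Conceptually, PII masking replaces each detected entity with a placeholder tag. The platform's actual entity detectors are configured in the UI; the regex patterns below are a simplified stand-in for illustration only:

```python
import re

# Simplified stand-in patterns -- NOT the platform's actual PII detectors.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def mask_pii(text: str, entities: list[str]) -> str:
    """Replace each selected PII entity with a placeholder tag like [EMAIL]."""
    for entity in entities:
        text = PII_PATTERNS[entity].sub(f"[{entity}]", text)
    return text

print(mask_pii("Reach me at jane@example.com or 555-123-4567.", ["EMAIL", "PHONE"]))
# → Reach me at [EMAIL] or [PHONE].
```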

Router

Router directs the workflow to the next node based on conditions or input, enabling dynamic branching paths within the agent recipe. This ensures that the workflow can adapt based on the given data or context, enhancing flexibility and decision-making in the process.

Node Methods: The Router supports three node methods for decision-making.

  • Default: This method follows the standard routing logic, processing data without additional customization. It adheres to a predefined flow, ensuring consistency and simplicity for straightforward workflows that don't require dynamic decision-making.

Using the Default method, routing conditions can be assigned to edges to determine the appropriate node for processing the request.

There are two available options:

  • Default Routing: Applies when no specific conditions are met.

  • Custom Routing: Lets you define explicit conditions for each edge in the provided text box.

The Default Routing Condition can be assigned to only one edge within the routing configuration.

  • Prompt:

    • The Prompt method enables the selection of a predefined prompt for the Router node from the available prompt list.

    • The selected prompt contains instructions that guide the router on how to direct the workflow.

    • The router will evaluate the input and route the workflow accordingly based on the prompt’s logic.

    • Here is the provided sample prompt:

You will receive a set of input messages containing a task. Your role is to analyze the latest (last) input message and provide the correct routing output based on that task. Use previous messages in the conversation as context only if necessary, but prioritize the latest (last) message for decision-making. Follow these instructions carefully:

1. If the latest input **starts with coding:**, respond with: ```{{"next": "Policy Agent"}}```
2. If the latest input **starts with sales data:**, respond with: ```{{"next": "Enterprise Sales Agent"}}```
3. If the latest input **starts with QNA:**, respond with: ```{{"next": "Bot QnA"}}```


Provide only the JSON output based on the identified task. Only respond with route options; don't say anything else.

Only published prompts, along with current versions and associated models, are visible within the recipe.

  • Lambda: The Lambda method integrates AWS Lambda functions to execute custom logic.

    • Lambda ARN: Enter the Amazon Resource Name (ARN) of the Lambda function you want to invoke. This uniquely identifies the Lambda function within AWS.

    • Input test payload: A sample payload used to test the Lambda function. This helps ensure the function behaves as expected with the provided input.

    • Test button: Enables you to validate the function by executing the test payload.

    • Overwrite Credentials (Optional): By default, the AWS credentials configured in settings are used to invoke AWS resources. However, if needed, you can provide alternate AWS credentials.

Invoke Retries specifies the number of retry attempts when an execution fails, ensuring improved reliability and fault tolerance in processing.

Connect the Router node to Knowledge Base, Agent, Prompt, Custom Function or Transform nodes based on the specific use case.
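Under the Lambda method, a routing function mirroring the sample prompt's rules might look like the sketch below. The event field carrying the latest message is an assumption here; adapt the key names to the payload your recipe actually sends:

```python
# Sketch of a routing Lambda. The exact event shape passed by the Router node
# is an assumption -- adjust the "input" key to match your recipe's payload.
def lambda_handler(event, context):
    message = event.get("input", "")  # assumed key carrying the latest message
    if message.startswith("coding:"):
        return {"next": "Policy Agent"}
    if message.startswith("sales data:"):
        return {"next": "Enterprise Sales Agent"}
    # Fall back to the QnA route when no prefix matches.
    return {"next": "Bot QnA"}
```

The returned `{"next": ...}` object matches the routing output format shown in the sample prompt above.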

Prompt

The Prompt node enables the system to execute logic-driven actions based on predefined prompts. You can select a prompt from the existing prompts in the Prompt Playground to add to the recipe.

Once a prompt is selected, the system displays the associated primary and fallback models, along with any guardrails that were configured in the Prompt Playground.

Additionally, you can navigate to the Prompt Playground by clicking the redirect icon, allowing you to view the prompt details and make any necessary modifications.

The following image illustrates the version and redirect icon.

To switch to a specific version, click on the displayed version. This will generate a list of all associated versions. Select the required version, and the system will load the complete prompt details corresponding to the selected version.

The following image displays the associated versions.

Only published prompts are visible within the recipe.

Refer to the following sample prompt.

Answer the question based on the context below. Keep the answer short and concise. Respond "Unsure about answer" if not sure about the answer.

{context}

{question}.

The Guardrail option is available on this tile. You can choose from existing guardrails in the Prompt Playground, which will be reflected in the recipe upon prompt selection. Alternatively, you may enable the default guardrail configured at the organizational level.

Invoke Retries specifies the number of retry attempts when an execution fails, ensuring improved reliability and fault tolerance in processing.

Metadata:

Metadata refers to supplementary information that provides context, details, or additional insights about a specific process, event, or interaction. Within the framework of prompts, metadata enables the dynamic exchange of external data, facilitating more informed and context-aware decision-making.

For metadata to be utilized, the prompt must explicitly reference the {metadata} variable within its body.

Metadata includes user-specific data, enabling the agent to personalize interactions and enhance user experience. Metadata can be reviewed in the copilot history.

Ensure that the scratchpad is enabled and that the prompt includes Scratchpad as a variable. The State settings are detailed in the preceding section; refer to it for comprehensive information.

Connect the Prompt node to the End node in the recipe.

Agent

Select an agent prompt from the available options. Each prompt encompasses pre-configured tools and settings essential for processing inquiries and responding effectively. Once you've selected the prompt, the canvas will reveal the tools and configurations integrated into the agent prompt. You can update the configurations for the preset tools, but you cannot add or delete an agent tool from the recipe canvas; to edit tool types, or add or delete tools, you need to edit the agent prompt in the Prompt Playground. These tools empower the agent to analyze queries thoroughly and generate precise responses.

Once an agent is selected, the system displays the associated primary and fallback models along with the current version. Additionally, you can navigate to the Prompt Playground by clicking the redirect icon, allowing you to view the agent details and make any necessary modifications.

The following image illustrates the version and redirect icon.

To switch to a specific version, click on the displayed version. This will generate a list of all associated versions. Select the required version, and the system will load the complete agent details corresponding to the selected version.

The following image displays the associated versions.

Only published agent prompts are visible within the recipe.

Refer to the following sample agent prompt.

You are a vehicle guide expert specializing in automotive manuals and answering queries based on the provided context. 

The context includes details on Keys, Doors, and Windows; Seats and Restraints; Storage; Instruments and Controls; Lighting; Infotainment System; Climate Controls; Driving and Operating; Vehicle Care; Service and Maintenance; Technical Data; Customer Information; Reporting Safety Defects; OnStar; Connected Services; 

 Only use the tools to answer the user's query. 

Your primary role is to assist users with vehicle-related inquiries, provide accurate information from the context, and send email notifications if requested.

Key Responsibilities:
{key_responsibilities}

Important Instructions:
{important_rules}

Guidelines for User Interactions:
{interaction_guidelines}

Invoke Retries specifies the number of retry attempts when an execution fails, ensuring improved reliability and fault tolerance in processing.

Metadata:

Metadata refers to supplementary information that provides context, details, or additional insights about a specific process, event, or interaction. Within the framework of agents, metadata enables the dynamic exchange of external data, facilitating more informed and context-aware decision-making.

For metadata to be utilized, the agent must explicitly reference the {metadata} variable within its body.

Metadata includes user-specific data, enabling the agent to personalize interactions and enhance user experience. Metadata can be reviewed in the copilot history.

Ensure that the scratchpad is enabled and that the agent prompt includes Scratchpad as a variable. The State settings are detailed in the preceding section; refer to it for comprehensive information.

Connect the Agent node to the End node in the recipe.

End

Marks the conclusion of a workflow, signaling that no further actions are required.

Sink

Integrating the Sink node into a workflow facilitates the secure storage, export, or transmission of processed data from upstream nodes to a designated destination. This ensures that the final output is efficiently managed, preserved, and made available for further use or analysis. The Sink node is designed to seamlessly store structured or unstructured data in cloud storage solutions such as Amazon S3 or a database.

There are four output types:

  • Connector

    Sends the output to an external service through a configured connector, enabling seamless integration with systems outside the platform. This is particularly useful for storing, forwarding, or processing data externally.

    When using a connector such as Amazon S3, the following configuration details are required:

    • S3 Bucket Path: Provide the target Amazon S3 bucket path where the data will be stored.

    • File Name Pattern: Allows you to define a structured naming pattern for saved files.

Filename Pattern Guide:

Define custom filenames for saved files using dynamic placeholders:

  1. {filename_prefix} → File name prefix.
  2. {current_datetime} → The timestamp when the file is saved.
  3. {metadata.KEY} → Metadata value (replace KEY with an actual key).

Examples:

  • "processed/{filename_prefix}{metadata.project}{current_datetime}.json"

  • "processed/{filename_prefix}{current_datetime}.json"

Note: Files are saved in .json format by default.
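The placeholder substitution can be pictured with the sketch below. This is an illustrative re-implementation only; the platform's own renderer and its exact timestamp format are not specified here, so the format used is an assumption:

```python
from datetime import datetime, timezone

def render_filename(pattern: str, filename_prefix: str, metadata: dict) -> str:
    """Fill the documented placeholders in a Sink filename pattern.

    Illustrative only: the timestamp format below is an assumption,
    not the platform's actual format.
    """
    current_datetime = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S")
    result = pattern.replace("{filename_prefix}", filename_prefix)
    result = result.replace("{current_datetime}", current_datetime)
    for key, value in metadata.items():
        result = result.replace("{metadata." + key + "}", str(value))
    return result

# Prints something like: processed/rundemo20250101T120000.json
print(render_filename(
    "processed/{filename_prefix}{metadata.project}{current_datetime}.json",
    "run",
    {"project": "demo"},
))
```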

    • Save Raw File: The raw file is saved without additional processing.

    • Message: Retrieves only the most recent message (last message) in the conversation, making it ideal for nodes that require processing the latest user input.

Connect the Prompt, Agent, Custom Function, and Transform nodes to the Sink node as required based on workflow needs.

  • Lambda: Data will be pre-processed within an AWS Lambda function before transmission to the output, enabling dynamic transformation such as formatting, filtering, enrichment, and other rule-based modifications to ensure data integrity and compliance with business logic.

    • Lambda ARN: Enter the Amazon Resource Name (ARN) of the Lambda function you want to invoke. This uniquely identifies the Lambda function within AWS.

    • Input Variables:

      • Defines the parameters or variables to be passed to the Lambda function.

      • These variables allow for dynamic data handling and contextual processing.

    • Input test payload: A sample payload used to test the Lambda function. This helps ensure the function behaves as expected with the provided input.

    • Test button: Enables you to validate the function by executing the test payload.

    • Overwrite Credentials (Optional): By default, the AWS credentials configured in settings are used to invoke AWS resources. However, if needed, you can provide alternate AWS credentials.

    • Message: Retrieves only the most recent message (last message) in the conversation, making it ideal for nodes that require processing the latest user input.

  • Knowledge Base: When the Output Type is set to Knowledge Base, the output from the recipe is directed to a knowledge base for storage or future retrieval.

    • Key configuration settings for this setup include:

      • Dataset: You are required to select the dataset where the output will be stored within the knowledge base.

      • Type: The type selection defines how the data will be processed or indexed within the knowledge base. In this case, OpenSearch is present, which integrates with OpenSearch for indexing and querying stored data. This setup ensures that data is searchable and can be easily retrieved when needed.

      • State Settings: Configures data access within the workflow, controlling what information nodes can retrieve and process.

        • Message: Retrieves only the most recent message (last message) in the conversation, making it ideal for nodes that require processing the latest user input.

  • Knowledge Graph: In this setup, the final output of the recipe is directed into a knowledge graph system for structured data storage, enabling complex relationships and querying between entities.

    • Key elements in the configuration are:

      • Type: The Neptune refers to Amazon Neptune, a managed graph database service optimized for storing and querying highly connected data. This configuration ensures that the data is structured and stored in a graph format, making it suitable for complex querying, analytics, and relationship mapping.

      • Dataset: The Dataset field requires you to select a dataset where the data will be stored within the graph database. This helps categorize and organize the data appropriately.

      • State Settings: Configures data access within the workflow, controlling what information nodes can retrieve and process.

        • Message: Retrieves only the most recent message (last message) in the conversation, making it ideal for nodes that require processing the latest user input.

Transform

The Transform module provides a Split and Merge node that enables users to manipulate data by either splitting it into smaller chunks or merging multiple segments. This functionality is particularly useful in data processing workflows where structured transformation of information is required.

There are two available node methods, listed as follows:

  • Split: When using the Split node method, input data is divided based on a user-specified strategy. The split operation enhances data processing, retrieval, and transformation by ensuring that each segment adheres to the selected criteria. The method supports various strategies for data segmentation, including:

    • Character: Splits the data at the character level.

    • Words: Divides the text into word-based segments.

    • Pages: Splits content based on document pagination.

    • Lambda: Uses a custom AWS Lambda function to determine the split logic.

      • Transform Type: The Lambda function receives an event payload where the entire input is wrapped under the key input. Ensure your function extracts and processes data accordingly.

      • Lambda ARN: Enter the Amazon Resource Name (ARN) for the AWS Lambda function that will process and split the data.

      • Input Test Payload: Enter test data to validate the Lambda function's behavior before deployment.

      • Test Button: Allows you to execute a test run of the configured Lambda function for validation.

      • Overwrite Credentials (Optional) → Allows you to override existing authentication settings with new credentials.

    • Chunk Size: The number of characters, words, or pages allowed in each chunk.

    • Scratchpad: Serves as a temporary storage or intermediary space for processing and managing data within the workflow. The Split operation utilizes an input method, meaning it receives the output from the preceding Scratchpad as its input for the current Split process.

  • Merge: The Merge node in the Transform module is used to combine multiple data segments into a unified structure. This is particularly useful when working with split data that needs to be reconstructed or when consolidating multiple data sources into a single format.

    • Merge Strategy: The Merge strategy determines the format in which the data will be merged. The available options include:

      • JSON → Combines data in a structured JSON format.

      • Lambda → Utilizes a custom AWS Lambda function to programmatically merge data.

        • Lambda ARN: Provide AWS Lambda function’s Amazon Resource Name (ARN).

        • Input Test Payload: Sample input data to test the transformation logic.

        • Test Button: Allows you to validate the function’s processing behavior.

        • Overwrite Credentials (Optional) → Allows you to override existing authentication settings with new credentials.

      • Text → Merges content into a plain text format.

    • Merge method: Determines how the merging process handles existing data. Two options are available:

      • Overwrite: Replaces any existing data with the newly merged data. This ensures that only the most recent merged version is retained.

      • Append: Instead of replacing, new data is added to an existing array or list within the JSON structure.

    • Scratchpad: Temporary storage used to retain intermediate data before final output.

    • Output: The merged data is written to an output location.

      • Method Selection:

        • Overwrite: Replaces the existing data.

        • Extend: Allows appending new data instead of replacing it.
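The Split strategies and Merge methods described above can be sketched as follows. This is an illustrative re-implementation under assumed chunking and merge semantics, not the platform's code:

```python
def split_text(text: str, strategy: str, chunk_size: int) -> list[str]:
    """Sketch of the Split node's Character and Words strategies (assumed semantics)."""
    if strategy == "character":
        # Fixed-size character chunks.
        return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    if strategy == "words":
        # Fixed-size word chunks, re-joined with single spaces.
        words = text.split()
        return [" ".join(words[i:i + chunk_size]) for i in range(0, len(words), chunk_size)]
    raise ValueError(f"unsupported strategy: {strategy}")


def merge_json(existing: dict, new: dict, method: str) -> dict:
    """Sketch of the Merge node's Overwrite and Append methods (JSON strategy)."""
    if method == "overwrite":
        return dict(new)  # keep only the newly merged data
    if method == "append":
        merged = dict(existing)
        for key, value in new.items():
            if isinstance(merged.get(key), list):
                # Append to an existing array instead of replacing it.
                merged[key] = merged[key] + (value if isinstance(value, list) else [value])
            else:
                merged[key] = value
        return merged
    raise ValueError(f"unsupported method: {method}")


print(split_text("one two three four five", "words", 2))           # → ['one two', 'three four', 'five']
print(merge_json({"chunks": ["a"]}, {"chunks": ["b"]}, "append"))  # → {'chunks': ['a', 'b']}
```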

Save and publish recipe

Saving the recipe preserves all configurations and connections made in the workflow for future reference or deployment.

Once a recipe is created and saved, you need to publish it to assign it a version number.

Refer to the following video to create an Agent with the Chat node.

Refer to the following video to create an Agent with the Webhook node.
