Create Agent Prompt
A variety of agent prompt templates are available, letting you select a predefined template and generate a prompt without creating one manually. These templates streamline prompt development, ensuring consistency and efficiency in agent interactions. Choose the template that best fits your requirements and integrate it directly within the system.
To create a new agent prompt, follow these steps:
On the Prompt Playground, start by clicking Task and selecting an Agent 2.0 task from the list of available tasks.
Set up a unique name for the agent prompt.
Max State Updates: Defines how often the AI agent can refine its response. The default is 3.
Create a prompt in natural language and define the instructions you want the agent to follow.
Variables: Allows the insertion of dynamic input parameters using curly braces, like {Variable}.
Use double curly braces to represent JSON in a prompt; this prevents the braces from being interpreted as a variable.
The Agent Input provides user input data that the Agent will process.
Save the prompt.
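The variable substitution and double-brace escaping described in the steps above can be sketched in Python; this is illustrative only, and Karini's own template engine may behave differently.

```python
# Illustrative sketch (not Karini's implementation): how single and double
# curly braces behave when a prompt template is filled in with str.format.
prompt_template = (
    "Answer the question: {question}\n"
    'Respond as JSON, e.g. {{"answer": "...", "confidence": 0.9}}'
)

# {question} is treated as a variable and substituted;
# {{...}} is escaped and survives as literal JSON braces.
rendered = prompt_template.format(question="What is the capital of France?")
print(rendered)
```

Note how the doubled braces come through as a single pair of literal braces, so the JSON example in the prompt is not mistaken for a `{Variable}` placeholder.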
Here is an example of an Agent prompt.
After saving the agent prompts, the Tool tab becomes accessible. Navigate to the Tool tab to configure the required settings.
Configuring Agent Tools
To configure an agent tool, go to the "Tools" tab on the prompt and start by selecting a tool type. Currently, the following tool types are supported:
Agent
The Agent tool in an Agent prompt lets you configure and manage Karini AI agents by selecting predefined prompts integrated with various Large Language Models (LLMs). It lists the available agent prompts linked to LLMs so you can set them up as tools for specific tasks, defining structured workflows, automating responses, and enhancing AI-powered decision-making. Once set up, the agent tool facilitates seamless interaction between the AI and external services, databases, or APIs, producing more efficient, context-aware responses based on user inputs and predefined logic.
Only published agents are available as tools. When an agent is selected, the system displays its current version along with a redirect icon, which takes you to the prompt playground to view the agent's details and make any necessary modifications.
The following image illustrates the version and redirect icon.
To switch to a specific version, click the displayed version. This lists all associated versions; select the one you need, and the system loads the complete agent details for that version.
The following image displays the associated versions.
Catalog
A catalog is a structured data repository that organizes information in a predefined format, ensuring easy retrieval and processing. Karini AI agents use catalogs to query relevant datasets, enhancing response accuracy and decision-making. You can select from existing catalogs to integrate as a tool, enabling seamless interaction with structured data sources. For detailed instructions on creating catalog schemas, refer to the Catalog Schemas Documentation.
Database
To use the catalog tool, you must also configure the database tool. This tool allows you to execute the SQL queries you generate and retrieve the results from the database. You can provide the SQL query along with the database name and table name as input to this tool, which will return the query results.
The following databases are available:
Athena
MySQL
PostgreSQL
Snowflake
Redshift
MS SQL
Oracle
Databricks Unity Catalog
Teradata
Select the appropriate database required for the catalog chosen in the Agent prompt. After selection, enter the corresponding credentials for the selected database.
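Conceptually, the database tool receives a SQL query and returns the results. A minimal sketch, using the stdlib `sqlite3` module as a stand-in for the databases listed above; the table and query are made up for illustration:

```python
import sqlite3

# Conceptual sketch of what the database tool does: receive a SQL query,
# execute it against the configured database, and return the results.
# sqlite3 stands in here for the databases listed above (Athena, MySQL, ...).
def run_query(connection, sql: str):
    cursor = connection.execute(sql)
    return cursor.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 9.99), (2, 24.50)])

# The agent would generate a query like this from the catalog schema.
rows = run_query(conn, "SELECT id, total FROM orders WHERE total > 10")
print(rows)  # [(2, 24.5)]
```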
Knowledge Base
The Knowledge Base tool efficiently stores and retrieves data using PostgreSQL. It saves embeddings and their corresponding data in the database, enabling quick and effective knowledge retrieval.
You set up PostgreSQL credentials to connect to a database and choose an embedding model for AI-powered queries.
Postgres Credentials
Host (Required): Specifies the server address where the PostgreSQL database is hosted.
User Name (Required): The username required to authenticate and connect to the PostgreSQL database.
User Password (Required): The password associated with the specified username for secure access.
Database: The name of the PostgreSQL database to connect to.
Port: Defines the port number used to connect to PostgreSQL (default: 5432).
Embeddings Model (Required): Users can select an embedding model for AI-driven vector searches. This model is used for semantic search and querying embeddings stored in the database.
Top K: Specifies the number of top results (K) to retrieve when performing an embedding-based query.
Example: 3 means the top 3 most relevant matches will be returned.
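A pure-Python sketch of what Top-K retrieval over stored embeddings does; the actual search runs inside PostgreSQL, and the texts and vectors below are made up for illustration:

```python
import math

# Sketch of Top-K retrieval over stored embeddings: score every stored
# vector against the query vector, then keep the K best matches.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query_vec, store, k=3):
    scored = sorted(store, key=lambda item: cosine(query_vec, item["embedding"]),
                    reverse=True)
    return [item["text"] for item in scored[:k]]

# Hypothetical stored chunks with toy 2-dimensional embeddings.
store = [
    {"text": "refund policy",  "embedding": [0.9, 0.1]},
    {"text": "shipping times", "embedding": [0.2, 0.8]},
    {"text": "return window",  "embedding": [0.8, 0.3]},
    {"text": "gift cards",     "embedding": [0.1, 0.9]},
]
print(top_k([1.0, 0.0], store, k=3))  # the 3 most similar chunks
```

With `k=3`, the three chunks whose embeddings lie closest to the query vector are returned, matching the "top 3 most relevant matches" behavior described above.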
Dataset (Vector Store)
Ingestion of your data into the vector store happens during the recipe "run" phase, when the data ingestion pipeline is executed. You can select from an existing dataset that has been ingested into the vector store.
Context Generation using Vector Search
You can select from the following retrieval options to obtain information from the vector store.
Use embedding chunks
Summarize chunks
Use document text for matching embeddings
You can provide a Top-K value and also enable reranking of retrieved embeddings. To enable reranking, you must configure the reranker LLM credentials in the Organization settings.
Advanced Query Reconstruction
Additionally, you can select advanced query reconstruction options to enable query rewrite for optimized and accurate search results. Refer to Dataset for details.
Multi-query rewrite: Enhances your search with multiple variations of your query to capture diverse perspectives and improve retrieval accuracy.
Query expansion: Enriches the query by automatically adding related terms or phrases, retrieving more comprehensive and relevant results.
Enable ACL restriction
When you enable this feature, the knowledge base will first be filtered based on the user's ACL (Access Control List) permissions. This ensures that only content the user is allowed to access is considered before performing the retrieval using semantic similarity. This step enhances security by limiting retrieval to permissible information.
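The filter-then-rank order this describes can be sketched as follows; the document names, group names, and similarity scores are hypothetical stand-ins (real scores would come from the embedding search):

```python
# Sketch of ACL-restricted retrieval: drop documents the user cannot
# access FIRST, then rank only the survivors by semantic similarity.
documents = [
    {"text": "public FAQ",      "acl": {"everyone"}, "score": 0.4},
    {"text": "HR salary bands", "acl": {"hr"},       "score": 0.9},
    {"text": "IT runbook",      "acl": {"it"},       "score": 0.7},
]

def retrieve(user_groups, docs, k=2):
    permitted = [d for d in docs if d["acl"] & user_groups]   # ACL filter first
    permitted.sort(key=lambda d: d["score"], reverse=True)    # then similarity ranking
    return [d["text"] for d in permitted[:k]]

# "HR salary bands" scores highest but is never considered, because the
# user is not in the "hr" group.
print(retrieve({"everyone", "it"}, documents))
```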
Enable dynamic Metadata filtering
EXPERIMENTAL FEATURE: When enabled, this feature uses a Large Language Model (LLM) to automatically generate custom metadata filters by analyzing metadata keys and the input query. These filters are then applied to narrow down the knowledge base before performing retrieval with semantic similarity. This is an experimental feature designed to optimize retrieval based on dynamic and context-aware metadata filtering.
Prompt (LLM)
The Prompt tool enables an agent to perform actions using another LLM, guided by that prompt's instructions. Select one of the existing prompts from the list and configure it as a tool; the LLM associated with the prompt carries out the tool's actions per the prompt instructions.
Input Schema: For the selected prompt, an input schema is generated automatically. The LLM configured as the Natural Language Assistant in the Organization settings is used for this generation. You can edit the schema if necessary.
Only published prompts are available as tools. When a prompt is selected, the system displays its current version along with a redirect icon, which takes you to the prompt playground to view the prompt's details and make any necessary modifications.
The following image illustrates the version and redirect icon.
To switch to a specific version, click the displayed version. This lists all associated versions; select the one you need, and the system loads the complete prompt details for that version.
The following image displays the associated versions.
REST API
You can have an agent invoke a REST API as a tool. To configure this tool, provide a REST API URL and method (GET, POST, or PUT).
REST URL: The REST URL is the endpoint address where the REST API is hosted. This URL specifies the exact location of the resource that you want to interact with via the API.
Example:
Method: Defines the type of HTTP request method to use when calling the API. Supported methods include GET, POST, and PUT.
Input Schema: The structure of the expected input data, typically defined in JSON format. It specifies the required fields and data types. You can enter this value yourself or have it auto-generated using the Setup Schema option.
Example:
Setup Schema: You can automatically generate the schema for the REST API from the test payload. The LLM configured as the Natural Language Assistant in the Organization settings is used for this generation. You can edit the schema if necessary.
Input Test Payload: This is a sample payload that adheres to the input schema and is used to test the API endpoint. It demonstrates the format and type of data that should be sent in a request.
Example
URL Credentials: These are the authentication details required to access the API.
User name (optional)
User password
API Token
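Putting the fields above together, a hypothetical REST tool configuration might look like the following sketch; the URL, schema, payload, and helper function are illustrative, not a real API:

```python
# Hypothetical REST tool configuration (all names are illustrative).
tool_config = {
    "rest_url": "https://api.example.com/v1/orders",
    "method": "POST",
    "input_schema": {
        "type": "object",
        "required": ["order_id"],
        "properties": {"order_id": {"type": "string"}},
    },
}

# A test payload that should conform to the input schema above.
test_payload = {"order_id": "A-1001"}

def check_required(schema, payload):
    """Minimal check that the payload carries every required field."""
    return [f for f in schema.get("required", []) if f not in payload]

missing = check_required(tool_config["input_schema"], test_payload)
print("missing fields:", missing)  # missing fields: []
```

A payload that omits a required field would surface it in `missing`, which is essentially what schema validation against the Input Schema catches before the API is called.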
Knowledge Graph
The Knowledge Graph tool in Agent prompts enables Karini AI agents to access structured, interconnected knowledge bases stored in a Neo4j database. The tool enhances data retrieval by using vector-based indexing and understanding semantic relationships between data points. It integrates with Neo4j, where structured data is stored as vector indexes. This allows agents to fetch relevant information based on context rather than relying only on keyword searches, making retrieval more intelligent and efficient.
To enhance semantic search capabilities, users must select an embedding model, which helps the agent understand relationships between entities and provide more relevant responses. When an agent prompt is executed, the tool queries the Knowledge Graph, retrieving relevant insights based on natural language inputs. This functionality supports AI-driven decision-making, document retrieval, and contextual reasoning.
Input Schema: The tool includes a predefined input schema that structures data for efficient retrieval.
Embedding Model Selection: You must select an embedding model to enable semantic search and contextual understanding.
Neo4j Credentials: A secure connection to the Neo4j database requires users to configure the necessary credentials.
URI: Specifies the Neo4j database connection endpoint (e.g., neo4j://host:port).
Database Name: Defines the specific Neo4j database to interact with.
Username: The credential for authenticating with the Neo4j database.
User Password: The password associated with the username for secure access.
Processing Region: Specifies the region for data processing (default: us-east-1).
Connection Testing: The Test Neo4j Connection button allows you to verify the connection and ensure the database is accessible before execution.
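A small sketch of validating the connection URI format before attempting to connect. The scheme names and the 7687 default port are standard Neo4j conventions; the helper function itself is illustrative (the real check is the Test Neo4j Connection button, which opens an actual session):

```python
from urllib.parse import urlparse

# Sketch: sanity-check a Neo4j URI of the form neo4j://host:port before
# handing it to a driver. neo4j://, neo4j+s://, bolt://, and bolt+s://
# are Neo4j's standard URI schemes; 7687 is the default Bolt port.
def parse_neo4j_uri(uri: str):
    parts = urlparse(uri)
    if parts.scheme not in ("neo4j", "neo4j+s", "bolt", "bolt+s"):
        raise ValueError(f"unexpected scheme: {parts.scheme}")
    return parts.hostname, parts.port or 7687

host, port = parse_neo4j_uri("neo4j://graph.example.com:7687")
print(host, port)  # graph.example.com 7687
```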
Lambda
The Lambda tool in agent prompts allows you to configure and invoke AWS Lambda functions directly from the agent interface, letting serverless functions perform agent actions as tools.
Lambda ARN: Enter the Amazon Resource Name (ARN) of the Lambda function you want to invoke. This uniquely identifies the Lambda function within AWS.
Input Schema: Specify the JSON schema that outlines the structure of the input data your Lambda function expects, including the required fields and their data types. You can enter this value yourself or have it auto-generated using the Setup Schema option.
Setup Schema: You can automatically generate the schema for the Lambda function from the test payload. The LLM configured as the Natural Language Assistant in the Organization settings is used for this generation. You can edit the schema if necessary.
Example:
Input Test Payload: This is a sample payload that will be used to test the Lambda function. This helps ensure the function behaves as expected with the provided input.
Example:
Overwrite Credentials: By default, the AWS credentials configured in Organization settings are used to invoke AWS resources. If needed, you can provide alternate AWS credentials to invoke the Lambda function.
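A sketch of preparing a Lambda test payload and checking it round-trips through JSON. The ARN and event fields are hypothetical, and the actual invocation via boto3 (shown only as a comment) is an assumption, not executed here:

```python
import json

# Hypothetical Lambda ARN and test event (illustrative values only).
lambda_arn = "arn:aws:lambda:us-east-1:123456789012:function:order-lookup"
event = {"order_id": "A-1001", "include_history": False}

# Lambda receives its input as a JSON payload.
payload_bytes = json.dumps(event).encode("utf-8")

# The real invocation would use boto3 (assumption, not run here):
#   boto3.client("lambda").invoke(FunctionName=lambda_arn, Payload=payload_bytes)

# Round-trip check: what the function would receive decodes back to the event.
received = json.loads(payload_bytes)
print(received == event)  # True
```

Testing the payload shape this way, before wiring in credentials, is roughly what the Input Test Payload field lets you do from the UI.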
Amazon Q Retriever
The Amazon Q Retriever tool in Agent Prompt is a powerful AI-driven retrieval mechanism that enables agents to access, process, and retrieve relevant information from structured knowledge bases stored in Amazon Q services. This tool is particularly useful in scenarios where AI agents need to fetch precise information from structured sources like enterprise knowledge bases, FAQs, technical documentation, or customer support data. It is designed to connect with Amazon Q, leveraging its capabilities to search, index, and return contextually relevant results based on user queries.
To use the Amazon Q Retriever tool, you must first configure the required credentials and parameters. The tool requires authentication details and database connection parameters to access the knowledge base securely.
For more details, refer to the Amazon Q Retriever documentation.
You must enter:
Amazon Q Application ID (Required)
This is the unique identifier for the Amazon Q application that the retriever is associated with.
It ensures that the queries are executed within the correct Amazon Q environment.
Amazon Q Retriever ID (Required)
The retriever ID specifies which retriever instance will be used.
Amazon Q uses retrievers to fetch relevant documents and responses from knowledge bases.
Client ID (Required)
The Client ID is used for authentication to securely connect with the Amazon Q service.
It ensures that the tool accesses data with the appropriate permissions.
Client Secret (Required)
The secret key associated with the Client ID for authentication.
This adds an additional layer of security when accessing Amazon Q services.
Issue URL (Required)
This is the authorization endpoint where the credentials are verified.
Amazon Q validates authentication requests through this URL before granting access.
Email (Required)
The email address is linked to the user account, ensuring access control and notifications.
Role ARN (Amazon Resource Name) (Required)
The IAM Role ARN (Amazon Resource Name) provides the necessary permissions to access Amazon Q services.
The tool needs an IAM role with the correct policies to retrieve data from the configured knowledge base.
AWS Region (Required)
Specifies the AWS region where the Amazon Q instance is hosted.
Example: us-east-1 (default region).
Selecting the correct region ensures the tool connects to the appropriate AWS services.
Max Results (Required)
Defines the maximum number of results the retriever should return for a given query.
Limiting the results optimizes performance and prevents excessive data retrieval.
Query Text
This is where users enter their search query or prompt.
The tool processes this input and retrieves the most relevant documents or answers from Amazon Q.
Test
The Test button validates the configuration and connectivity of the Amazon Q Retriever tool before you save it and execute queries.