Create Prompt


Karini AI's Prompt Playground enables domain experts to become prompt engineers through a guided experience. You can develop high-quality prompts, test them, and track your prompt experimentation by saving prompt runs.

There are multiple ways to create a prompt:

Create Prompt Using a Prompt Template

On the Prompt Playground, click "Add new" to start creating a new prompt. Click "Prompt templates" to access the available prompt templates. You can then select a template relevant to your task and customize the prompt as required, as described in the following section.

Create New Prompt

  1. On the Prompt Playground, click "Add new" to start creating a new prompt.

  2. Provide a prompt name and select an appropriate task type from the available list (for example, Classification, Summarization, QnA, Evaluation, or Agent).

  3. Construct your prompt with appropriate instructions and variables.

  4. Variables can be added dynamically using the "Add variable" button.

  5. Select the "User Input" option if the variable's value will be provided by the user in the interface.

  6. The constructed prompt, including the context and variables, can be viewed in the Prompt panel on the right-hand side (a sketch of how variables resolve follows this list).

  7. Once the prompt is created, it can be tested on the Test & Compare tab.
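
To make the variable mechanics concrete, here is a minimal sketch of how a prompt with variables might be resolved before being sent to an LLM. The curly-brace placeholder syntax and the render_prompt function are illustrative assumptions, not Karini AI's actual implementation.

```python
# Hypothetical sketch: resolving prompt variables before an LLM call.
# The {curly-brace} placeholders and function names are illustrative
# assumptions, not Karini AI's actual implementation.

PROMPT_TEMPLATE = (
    "Answer the question using only the provided context.\n\n"
    "Context:\n{context}\n\n"
    "Question: {question}\n"
)

def render_prompt(template: str, **variables: str) -> str:
    """Substitute variable values into the prompt template."""
    return template.format(**variables)

prompt = render_prompt(
    PROMPT_TEMPLATE,
    context="Karini AI's prompt playground supports reusable variables.",
    question="What does the prompt playground support?",
)
print(prompt)
```

During testing you supply these values by hand; inside recipes and copilots, the platform fills variables such as Context automatically.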

Special Variables

When authoring prompts, the following variables are treated as special, predefined variables.

  1. Context: This is a predefined variable for the LLM. For prompt testing, authors can provide appropriate text as the value for this variable. However, for production use, or for use within recipes and copilots, the value of this variable is replaced with relevant context retrieved from the vector store. You can also upload a context file as input to this variable; the file must be of type .txt or .pdf, and the contents of a PDF file are automatically preprocessed with OCR before being added to the context. The maximum size of a context file is 2 MB (see the validation sketch after this list).

  2. Question: This is a predefined variable for the LLM. For prompt testing, authors can provide a question related to the context as the value for this variable. Enable the User Input checkbox on the Edit Prompt page to display the question under User Input in the Test & Compare section.

  3. Evaluation Metric Name: This is a predefined variable for the LLM used in Evaluation prompts. For prompt testing, authors can provide the evaluation metric name as the value for this variable (a combined sketch of all evaluation variables follows this list).

  4. Evaluation Metric Description: This is a predefined variable for the LLM used in Evaluation prompts. For prompt testing, authors can provide the evaluation metric description as the value for this variable.

  5. Evaluation Grading Criteria: This is a predefined variable for the LLM used in Evaluation prompts. For prompt testing, authors can provide the evaluation grading criteria as the value for this variable.

  6. Evaluation Input: This is a predefined variable for the LLM used in Evaluation prompts. For prompt testing, authors can provide the question as the value for this variable.

  7. Evaluation Output: This is a predefined variable for the LLM used in Evaluation prompts. For prompt testing, authors can provide an answer to compare against the ground truth output for assessment.

  8. Evaluation Ground Truth: This is a predefined variable for the LLM used in Evaluation prompts. For prompt testing, authors can provide the ground-truth answer for assessment.
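
To make the context-file rules in item 1 concrete, here is a hedged sketch of a client-side check for the stated constraints (.txt or .pdf, at most 2 MB). The function and constant names are hypothetical; the platform performs its own validation and OCR preprocessing.

```python
# Hypothetical check of the context-file rules described above
# (.txt or .pdf, at most 2 MB). Names are illustrative assumptions,
# not Karini AI's actual upload code.
from pathlib import Path

MAX_CONTEXT_FILE_BYTES = 2 * 1024 * 1024  # 2 MB limit from the docs
ALLOWED_SUFFIXES = {".txt", ".pdf"}

def validate_context_file(path: str) -> None:
    """Raise ValueError if the file can't be used as a Context value."""
    p = Path(path)
    if p.suffix.lower() not in ALLOWED_SUFFIXES:
        raise ValueError(f"Context file must be .txt or .pdf, got {p.suffix}")
    if p.stat().st_size > MAX_CONTEXT_FILE_BYTES:
        raise ValueError("Context file exceeds the 2 MB limit")
    # PDF contents are OCR-preprocessed by the platform before being
    # added to the context; no client-side action is needed for that.
```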

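As a combined illustration of the evaluation variables (items 3 through 8), the following sketch assembles them into a single evaluation prompt. The variable names and layout are assumptions for illustration, not Karini AI's actual evaluation template.

```python
# Hypothetical sketch of an evaluation prompt assembled from the
# predefined evaluation variables. Names and layout are illustrative
# assumptions, not Karini AI's actual evaluation template.

EVALUATION_TEMPLATE = """\
You are grading a model answer.

Metric: {evaluation_metric_name}
Metric description: {evaluation_metric_description}
Grading criteria: {evaluation_grading_criteria}

Input question: {evaluation_input}
Model output: {evaluation_output}
Ground truth: {evaluation_ground_truth}

Score the model output against the ground truth using the criteria above.
"""

evaluation_prompt = EVALUATION_TEMPLATE.format(
    evaluation_metric_name="Faithfulness",
    evaluation_metric_description="Whether the answer is supported by the context.",
    evaluation_grading_criteria="Grade from 1 (unsupported) to 5 (fully supported).",
    evaluation_input="What does the prompt playground support?",
    evaluation_output="It supports variables and prompt templates.",
    evaluation_ground_truth="It supports variables, templates, and prompt runs.",
)
print(evaluation_prompt)
```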