Karini AI Documentation

Prompt Management

Karini AI provides comprehensive prompt management capabilities that empower users to write, test, save, audit, and manage prompts, prompt templates, and prompt versions.

Karini AI’s prompt playground revolutionizes how prompts are created, tested, and perfected across their lifecycle. This user-friendly, dynamic platform turns domain experts into skilled prompt authors, offering a guided experience with ready-to-use templates to kickstart prompt creation. You can quickly evaluate your prompts across different models and model parameters, comparing response quality, token count, and response time to select the best option.
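
For a concrete picture of this kind of comparison, the sketch below runs one prompt against several model endpoints and records the response, an approximate token count, and the response time. This is an illustrative example only: `call_model`, `compare_prompt`, and the model names are placeholders, not part of Karini AI's API.

```python
import time

def call_model(model_name: str, prompt: str, temperature: float = 0.2) -> str:
    """Stand-in for a real LLM call; replace with your provider's SDK."""
    return f"[{model_name}] stub response for: {prompt[:40]}"

def compare_prompt(prompt: str, models: list[str]) -> list[dict]:
    """Run one prompt against several models and collect simple metrics."""
    results = []
    for model in models:
        start = time.perf_counter()
        response = call_model(model, prompt)
        elapsed = time.perf_counter() - start
        results.append({
            "model": model,
            "response": response,
            "approx_tokens": len(response.split()),  # crude proxy; real tokenizers differ
            "response_time_s": round(elapsed, 3),
        })
    return results

# Compare the same prompt across two hypothetical endpoints.
runs = compare_prompt("Summarize the attached policy document in three bullet points.",
                      ["model-a", "model-b"])
for run in runs:
    print(run["model"], run["approx_tokens"], run["response_time_s"])
```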

Using Karini’s Prompt Playground, authors can:

  • Author, Compare, and Test Prompts:

    • Experiment with prompts by adjusting the prompt text, models, or model parameters.

    • Quickly compare prompts across multiple authorized models on response quality, token count, and response time to select the best prompt.

  • Save Prompt Run:

    • Capture and save each trial, including the prompt, selected models, settings, generated responses, and metrics such as token count and response time (a rough sketch of such a record appears after this list).

    • If a “best” response is chosen during testing, it’s marked for easy identification.

  • Analyze Prompt Run:

    • Review saved prompt runs to enhance and refine your work.

    • Evaluate and compare prompts for response quality and performance.

  • Time Travel:

    • Revert to a previous prompt version by rolling back to a historical prompt run.

    • Save a historical prompt run as a new prompt or prompt template for future experiments or to integrate into a recipe workflow.

  • Offline Analysis:

    • Download all prompt runs as a report for comprehensive offline analysis or to meet auditing requirements.
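
To make the saved-run and offline-analysis ideas above more tangible, the sketch below assembles the fields described in this section (prompt text, model, settings, response, token count, response time, and an optional "best" flag) into a record and writes a batch of runs to a CSV report. The `PromptRun` fields, `export_runs`, and the CSV layout are assumptions for illustration, not Karini AI's actual schema or export format.

```python
import csv
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class PromptRun:
    """Illustrative record of one prompt trial; field names are assumptions."""
    prompt: str
    model: str
    settings: dict
    response: str
    token_count: int
    response_time_s: float
    is_best: bool = False
    saved_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def export_runs(runs: list[PromptRun], path: str = "prompt_runs_report.csv") -> None:
    """Flatten saved runs into a CSV report for offline analysis or audits."""
    if not runs:
        return
    rows = [asdict(run) for run in runs]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)
```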

The following video demonstrates Karini AI's prompt management capabilities.

<<prompt video>>

