
Set up and execute experiment

To start an experiment, navigate to the Prompt Optimization Experiments section and click the Add new button.

Follow these steps to set up and execute a Prompt Optimization Experiment:

Step 1: Define the Experiment

  • Enter a descriptive name for your experiment in the Experiment Name field.

  • Provide a clear and concise description of the experiment’s objective in the Prompt Description field. This will guide the optimization process.

Step 2: Configure the Initial Prompt

  • Select an existing prompt from the Select Prompt dropdown.

  • Upload a CSV file that includes a field for each prompt input variable along with its corresponding ground truth answer. This dataset is used to evaluate the prompt responses and optimize the prompt; a sample layout is sketched after this list.

  • Click Show Dataset to preview the uploaded dataset and verify that it matches the required structure before proceeding with prompt optimization.
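
As an illustration, the evaluation dataset is a CSV with one column per prompt input variable plus a column holding the expected (ground truth) answer. The column names below (question, ground_truth) are hypothetical; use the input variable names defined in your own prompt. A minimal sketch in Python:

```python
# Illustrative only: column names are hypothetical and should match the
# input variables defined in your prompt, plus a ground truth column.
import csv

rows = [
    {"question": "What is the capital of France?", "ground_truth": "Paris"},
    {"question": "Who wrote Hamlet?", "ground_truth": "William Shakespeare"},
]

with open("evaluation_dataset.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["question", "ground_truth"])
    writer.writeheader()
    writer.writerows(rows)
```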

Step 3: Specify Any Necessary Improvements

  • This step allows you to specify the enhancements required for the prompt.

  • You can select one or more of the predefined improvement suggestions to refine your prompt.

  • The available options include:

    • Refine for Clarity

    • Shorten for Conciseness

    • Add Specific Examples

    • Rephrase for Tone Consistency

    • Improve Structure

    • Make the Prompt More Verbose

    • Make the Prompt More Concise

  • You can customize and add specific improvements based on your requirements.

  • You can delete any added improvements using the delete button.

Step 4: Set Up the Evaluation Parameters

  • Choose a Judge LLM by selecting an appropriate model from the Model dropdown in the Judge LLM section.

  • Set the maximum number of prompt optimization iterations to perform for each candidate LLM.

Step 5: Add Candidate LLMs for Evaluation

  • Add the LLM endpoint(s) that will be tested for performance evaluation. A sketch showing how the settings from Steps 1–5 fit together follows below.
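
For reference, the sketch below gathers the settings collected in Steps 1 through 5 into a single configuration object. The class and field names are illustrative only and do not correspond to a Karini AI API; they simply show how the experiment name, prompt, dataset, improvements, Judge LLM, iteration limit, and candidate endpoints relate to one another.

```python
# Hypothetical summary of a Prompt Optimization Experiment configuration;
# names are illustrative and do not reflect an actual Karini AI API.
from dataclasses import dataclass, field

@dataclass
class PromptOptimizationExperiment:
    name: str                      # Step 1: Experiment Name
    description: str               # Step 1: Prompt Description
    prompt: str                    # Step 2: prompt chosen from the Select Prompt dropdown
    dataset_csv: str               # Step 2: CSV with input variables and ground truth answers
    improvements: list[str]        # Step 3: requested improvements
    judge_llm: str                 # Step 4: model that scores candidate responses
    max_iterations: int            # Step 4: optimization iterations per candidate LLM
    candidate_llms: list[str] = field(default_factory=list)  # Step 5: endpoints under evaluation

experiment = PromptOptimizationExperiment(
    name="faq-prompt-optimization",
    description="Improve answer accuracy for the FAQ answering prompt",
    prompt="faq_answering_prompt",
    dataset_csv="evaluation_dataset.csv",
    improvements=["Refine for Clarity", "Add Specific Examples"],
    judge_llm="judge-model-endpoint",
    max_iterations=5,
    candidate_llms=["candidate-endpoint-a", "candidate-endpoint-b"],
)
```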

Step 6: Save the Experiment

  • Click "Save" to store the experiment setup for future reference or modifications.

  • The system allows you to save prompt optimization experiments at any stage, ensuring flexibility in the setup process.

Step 7: Execute the Experiment

  • Once all configurations are complete, click "Run Optimization" to start the prompt refinement process.

  • Upon selecting "Run Optimization," a confirmation pop-up is displayed asking you to confirm the start of the optimization process.

Cloning a Prompt Optimization Experiment

Karini AI supports cloning a prompt optimization experiment. The clone functionality enables you to replicate an existing prompt optimization experiment, preserving all associated configurations, including the initial prompt, evaluation dataset, requested improvements, Judge LLM, candidate LLMs, model parameters, and iteration settings. This feature facilitates iterative experimentation, allowing you to adjust specific parameters and explore variations without modifying the original experiment.

How to Clone an Experiment

To start the cloning process, follow these steps:

  1. Navigate to the Experiment Dashboard – Locate the experiment you want to clone.

  2. Click the "Clone" Button – Found on the experiment details page.

  3. Edit the New Experiment (Optional) – Once cloned, the new experiment retains all settings but can be modified independently.

  4. Enter a Name for the Experiment – Provide a name for the cloned experiment.

  5. Run the Experiment – Execute the cloned experiment with updated configurations as needed.
