Karini AI Documentation

Prompt Runs

Karini AI lets you track prompt experiments throughout the prompt engineering lifecycle. Saved prompt runs are available in the Prompt runs tab for easy access and management.

Save Prompt Run

  • You can choose to save the prompt run with the default name or provide an alternate name.

  • The saved prompt run captures the following details:

    • Task type

    • Prompt

    • Variables

    • Model & Model parameters

    • Model Response

    • Response Statistics

  • After saving the prompt run, the Primary and Fallback models are displayed within the prompt run, each highlighted with a distinct colored background.
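Conceptually, a saved prompt run is a record of the fields listed above. The structure below is an illustrative sketch only; the field names and layout are assumptions, not Karini AI's actual storage schema:

```python
# Illustrative shape of a saved prompt run; all field names and
# values here are hypothetical, not the platform's real schema.
saved_run = {
    "name": "run-2024-06-01",
    "task_type": "Text Generation",
    "prompt": "Summarize the following document: {{document}}",
    "variables": {"document": "..."},
    "model": {"primary": "claude-3-5-sonnet", "fallback": "claude-3-haiku"},
    "model_parameters": {"temperature": 0.7, "max_tokens": 1024},
    "response": "...",
    "response_statistics": {"input_tokens": 512, "output_tokens": 180},
}
```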

Prompt Runs Actions

Prompt runs come with a set of actions, which can be enabled by selecting the checkbox in front of one or more prompt run records.

The following actions can be executed on prompt runs:

When a single prompt run is selected:

Selecting a single record in the prompt runs table enables the following actions from the action button:

  1. Calculate projected cost: The interface allows you to estimate the cost of running a prompt based on specific input parameters.

    The key components of the interface are:

    • Requests: Select the request rate, which can be configured as either Requests per Minute or Requests per Hour.

    • Requests/Minute Value: Enter the value for the selected request rate, either Requests per Minute or Requests per Hour.

    • Uptime/Day (hrs): Enter the number of operational hours per day, representing the system's uptime per day in hours.

    • Type: The time period for cost estimation, which can be set to Monthly or Yearly.

In the example shown, the projected cost for the selected model (Anthropic Claude 3.5 Sonnet) is $11.23, visually represented in a bar chart.

The download arrow icon located in the top-right corner of the projected cost chart allows users to export the cost estimation data. By clicking this button, you can download the cost projection results, which can be used for further analysis, documentation, or reporting purposes.

The expand button on the left side allows users to enlarge the projected cost chart for better visibility, enabling detailed analysis of cost data, labels, and values. This enhances readability, especially for complex datasets or cost comparisons.
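The cost estimate combines the request rate, daily uptime, and period described above. A minimal sketch of that arithmetic follows; the per-request price used here is a hypothetical placeholder, not Karini AI's or any vendor's actual pricing:

```python
def projected_cost(requests_per_minute, uptime_hours_per_day,
                   cost_per_request, period="Monthly"):
    """Estimate spend for a prompt at a sustained request rate.

    cost_per_request is a hypothetical blended price per request
    (input plus output tokens); it is an assumption for illustration.
    """
    requests_per_day = requests_per_minute * 60 * uptime_hours_per_day
    days = 30 if period == "Monthly" else 365  # simplified calendar
    return requests_per_day * days * cost_per_request

# 1 request/minute at 8 hours uptime/day, assumed $0.00078/request
print(f"${projected_cost(1, 8, 0.00078):.2f}")  # → $11.23
```

With these assumed inputs the sketch reproduces a figure like the $11.23 example above, but the actual calculator uses the selected model's real pricing.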

  2. Rollback: This action reverts the prompt to the selected prompt run, replacing the current prompt with the prompt instructions, configurations, and settings from that run.

  3. Save as new prompt: This action enables saving the selected prompt run as a new prompt, preserving its configuration and settings for future use.

  4. Save as new prompt template: This action allows saving the selected prompt run as a new prompt template, which can be reused across different scenarios or projects.

  5. Export: This action facilitates exporting the details of the selected prompt run into a CSV file for further analysis.
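An exported prompt-run CSV can be post-processed with standard tooling. The snippet below is a sketch using an invented sample; the column names are assumptions and may differ from the actual export schema:

```python
import csv
import io

# Hypothetical sample mimicking an exported prompt-run CSV;
# the real export's column names may differ.
sample = """name,task_type,model,input_tokens,output_tokens,latency_ms
run-1,Text Generation,claude-3-5-sonnet,512,180,1240
run-2,Text Generation,claude-3-5-sonnet,498,210,1310
"""

rows = list(csv.DictReader(io.StringIO(sample)))
avg_latency = sum(int(r["latency_ms"]) for r in rows) / len(rows)
print(f"{len(rows)} runs, average latency {avg_latency:.0f} ms")
```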

When multiple prompt runs are selected:

Selecting multiple records in the prompt runs table enables the following actions from the action button:

  1. Calculate projected cost: This action allows for the calculation and comparison of projected costs across different models, enabling informed decision-making based on cost efficiency and performance.

    A maximum of five prompt runs can be selected to calculate the projected cost.

The cost projection calculates monthly expenses based on requests per minute, uptime per day, and each model's pricing, and a bar chart visualizes the projected costs for the selected models side by side.

  2. Export: This action enables exporting the details of the selected prompt runs into a CSV file for further analysis.
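The multi-model cost comparison above applies the same rate arithmetic to each model. In this sketch, the model names and per-request prices are hypothetical placeholders, not actual vendor pricing:

```python
# Hypothetical per-request prices in USD; not actual vendor pricing.
MODEL_PRICES = {
    "model-a": 0.00078,
    "model-b": 0.00032,
    "model-c": 0.00120,
}

def monthly_cost(price_per_request, requests_per_minute=1, uptime_hours=8):
    # requests/min * 60 min * uptime hours/day * 30 days * price/request
    return requests_per_minute * 60 * uptime_hours * 30 * price_per_request

costs = {m: monthly_cost(p) for m, p in MODEL_PRICES.items()}
for model, cost in sorted(costs.items(), key=lambda kv: kv[1]):
    print(f"{model}: ${cost:.2f}/month")
```

Ranking the results by cost, as the bar chart does visually, makes the cheapest adequate model easy to spot.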

