Set up and execute an experiment
To start an experiment, navigate to the Prompt Optimization Experiments section and click the Add new button.
Follow these steps to set up and execute a Prompt Optimization Experiment:
Step 1: Define the Experiment
Enter a descriptive name for your experiment in the Experiment Name field.
Provide a clear and concise description of the experiment’s objective in the Prompt Description field. This will guide the optimization process.
Step 2: Configure the Initial Prompt
Select an existing prompt from the Select Prompt dropdown.
Upload a CSV file that includes a column for each prompt input variable along with its corresponding ground truth answer. This dataset is used to evaluate your prompt responses and optimize the prompt.
Click Show Dataset to preview the uploaded dataset and verify that its formatting matches the required structure before proceeding with prompt optimization.
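The expected CSV layout can be sketched as follows. The column names (`question`, `ground_truth`) are illustrative assumptions; the actual input-variable columns depend on the variables your prompt defines.

```python
import csv
import io

# Hypothetical dataset: one column per prompt input variable ("question" here)
# plus a ground-truth answer column. Column names are illustrative only.
SAMPLE_CSV = """question,ground_truth
What is the capital of France?,Paris
What is 2 + 2?,4
"""

def validate_dataset(csv_text, input_variables, truth_column="ground_truth"):
    """Check that every row provides all input variables and a ground truth."""
    reader = csv.DictReader(io.StringIO(csv_text))
    required = set(input_variables) | {truth_column}
    missing = required - set(reader.fieldnames or [])
    if missing:
        raise ValueError(f"Missing columns: {sorted(missing)}")
    rows = list(reader)
    for i, row in enumerate(rows, start=1):
        if any(not row[col].strip() for col in required):
            raise ValueError(f"Row {i} has an empty required field")
    return rows

rows = validate_dataset(SAMPLE_CSV, input_variables=["question"])
print(len(rows))  # 2
```

A check like this is roughly what the Show Dataset preview lets you do by eye: confirm the columns and ground-truth values are present before running the optimization.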
Step 3: Specify any necessary improvements
This step lets you specify enhancements required for the prompt.
You can select one or more predefined improvement suggestions to refine your prompt.
The available options include:
Refine for Clarity
Shorten for Conciseness
Add Specific Examples
Rephrase for Tone Consistency
Improve Structure
Make the Prompt More Verbose
Make the Prompt More Concise
You can customize and add specific improvements based on your requirements.
You can delete any added improvements using the delete button.
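The selection above can be modeled as a simple list of improvement instructions. This sketch is illustrative only; it mirrors the add and delete behavior of the UI, not any actual Karini AI API.

```python
# Predefined suggestions offered by the UI (from the list above).
PREDEFINED_IMPROVEMENTS = [
    "Refine for Clarity",
    "Shorten for Conciseness",
    "Add Specific Examples",
    "Rephrase for Tone Consistency",
    "Improve Structure",
    "Make the Prompt More Verbose",
    "Make the Prompt More Concise",
]

# Select predefined suggestions, then add a custom improvement.
selected = ["Refine for Clarity", "Add Specific Examples"]
assert all(s in PREDEFINED_IMPROVEMENTS for s in selected)
selected.append("Always answer in JSON")  # hypothetical custom improvement

# Deleting an added improvement mirrors the UI's delete button.
selected.remove("Add Specific Examples")

print(selected)  # ['Refine for Clarity', 'Always answer in JSON']
```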
Step 4: Set Up the Evaluation Parameters
Choose a Judge LLM by selecting an appropriate model from the Model dropdown in the Judge LLM section.
Set the maximum number of prompt optimization iterations to perform for each candidate LLM.
Step 5: Add Candidate LLMs for Evaluation
Add the LLM endpoint(s) that will be tested for performance evaluation.
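Steps 4 and 5 together define the evaluation setup. A minimal sketch of that configuration follows; the field names and model names are hypothetical, not Karini AI's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class EvaluationConfig:
    """Hypothetical container for the Judge LLM and iteration settings."""
    judge_model: str                     # model chosen from the Judge LLM dropdown
    max_iterations: int                  # optimization iterations per candidate LLM
    candidate_models: list = field(default_factory=list)  # endpoints under test

    def __post_init__(self):
        if self.max_iterations < 1:
            raise ValueError("max_iterations must be at least 1")
        if not self.candidate_models:
            raise ValueError("at least one candidate LLM is required")

config = EvaluationConfig(
    judge_model="judge-model",           # illustrative name
    max_iterations=5,
    candidate_models=["candidate-a", "candidate-b"],
)
print(config.max_iterations)  # 5
```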
Step 6: Save the Experiment
Click "Save" to store the experiment setup for future reference or modifications.
The system allows you to save a prompt optimization experiment at any stage of the setup process.
Step 7: Execute the Experiment
Once all configurations are complete, click "Run Optimization" to start the prompt refinement process.
After you select "Run Optimization," a confirmation pop-up appears so you can confirm starting the optimization process.
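Conceptually, the run combines the pieces configured above: for each candidate LLM, the prompt is refined for up to the configured number of iterations, with the Judge LLM scoring responses against the ground truth. A rough sketch of that loop follows; `generate`, `judge_score`, and `refine_prompt` are placeholder functions assumed for illustration, not Karini AI APIs.

```python
def run_optimization(prompt, dataset, candidates, judge, max_iterations,
                     generate, judge_score, refine_prompt):
    """Hypothetical sketch of the prompt-refinement loop."""
    results = {}
    for model in candidates:
        best_prompt, best_score = prompt, 0.0
        current = prompt
        for _ in range(max_iterations):
            # Score the current prompt on every dataset row with the Judge LLM.
            scores = [
                judge_score(judge, generate(model, current, row),
                            row["ground_truth"])
                for row in dataset
            ]
            avg = sum(scores) / len(scores)
            if avg > best_score:
                best_prompt, best_score = current, avg
            # Produce a refined prompt for the next iteration.
            current = refine_prompt(current, avg)
        results[model] = (best_prompt, best_score)
    return results
```

The sketch keeps the best-scoring prompt per candidate, which matches the goal of the experiment: find the prompt variant each candidate LLM answers best.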
Cloning a Prompt Optimization Experiment
Karini AI supports cloning a prompt optimization experiment. The clone functionality enables you to replicate an existing prompt optimization experiment, preserving all associated configurations, including the initial prompt, evaluation dataset, requested improvements, Judge LLM, candidate LLMs, model parameters, and iteration settings. This feature facilitates iterative experimentation, allowing you to adjust specific parameters and explore variations without modifying the original experiment.
How to Clone an Experiment
To start the cloning process, follow these steps:
Navigate to the Experiment Dashboard: locate the experiment you want to clone.
Click the "Clone" button on the experiment details page.
Edit the new experiment (optional): the cloned experiment retains all settings but can be modified independently.
Enter a name for the new experiment.
Run the experiment: execute the cloned experiment with updated configurations as needed.