Agent Recipe

Agent recipes use agent prompts to build generative AI pipelines. Using agent recipes, you can create powerful AI agents that combine the reasoning power of large language models (LLMs) with the ability to plan and act.

To create a new agent recipe, go to the Recipe Page, click Add New, select the appropriate runtime option, provide a user-friendly name and detailed description, and choose Agent as the recipe type.

Agent

Drag and drop the agent tile onto the canvas and select an agent prompt from the available options. Each prompt includes the pre-configured tools and settings needed to process queries and respond effectively.

Once you select the prompt, the canvas displays the tools and configurations built into the agent prompt. You can update the configurations of the preset tools, but you cannot add or delete an agent tool from the recipe canvas. To change tool types, or to add or delete tools, edit the agent prompt in the prompt playground. These tools enable the agent to analyze queries thoroughly and generate precise responses.

Output

By adding an Output element to the recipe, you can test the recipe and analyze the responses.

To configure the details displayed on the Output tile, refer to the Output section.

Link the Agent element to the Output element. This connection passes the agent's output to the Output element so the system can display the generated response.

Save and publish recipe

Saving the recipe preserves all configurations and connections made in the workflow for future reference or deployment.

Once a recipe is created and saved, you need to publish it to assign it a version number.

Test recipe

A recipe can be tested by adding and configuring the Output element on the recipe canvas (see Create Recipe).

Click the Test button to open a chat window, where you can interact through queries. Submit your question and review the response generated by the recipe. The response includes the following:

Answer

You can see the real-time response from your recipe pipeline, which includes the answer to the question, a prompt lens icon, a trace icon, and statistics. If the model selected in the prompt for the recipe supports streaming, you will see a streaming response.
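
If you are curious what streaming looks like mechanically, the sketch below is a rough, hypothetical illustration of consuming a streamed response token by token; `fake_stream_completion` is a stand-in for a streaming LLM call, not this platform's API.

```python
from typing import Iterator

def fake_stream_completion(prompt: str) -> Iterator[str]:
    """Stand-in for a streaming LLM call; yields tokens one at a time."""
    for token in ["Paris ", "is ", "the ", "capital ", "of ", "France."]:
        yield token

def render_streaming_answer(prompt: str) -> str:
    """Print tokens as they arrive, then return the assembled answer."""
    parts = []
    for token in fake_stream_completion(prompt):
        print(token, end="", flush=True)  # partial answer appears immediately
        parts.append(token)
    print()
    return "".join(parts)

render_streaming_answer("What is the capital of France?")
```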

Prompt Lens

Prompt lens lets you peek behind the scenes as the request is being executed. Here, you can inspect the input sent to the large language model (LLM), including the system instructions, the user question, and the assembled prompt. This lets you analyze the quality of the context retrieved from the vector store and adjust your context generation strategy if needed.

You can view the streaming response in the prompt lens. Once the response in the prompt lens is complete, it auto-refreshes, and the answer is then displayed in the chat widget.

You can view the following information in the prompt lens after the request is processed.

  • Agent scratch pad: The scratch pad aids in refining prompts, documenting interactions, or brainstorming ideas based on the outputs received from the selected models.

  • Agent response: The agent response refers to the output or action taken by the selected model in response to a user's prompt or query.

  • Tool response: Provides insights or summaries related to the tools used, their performance metrics, or operational status.

  • Trace:

    Trace has two sections: Prompt and Attributes. An illustrative trace record is sketched after this list.

    1. Prompt: You can view the trace of each operation executed during processing. It includes the following:

      • Input

      • Output

    2. Attributes: These include various parameters and metrics associated with each request. Some of the attributes include:

      • Input Tokens

      • Completion Tokens

      • Model parameters such as temperature, max tokens, etc.
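
The exact fields shown in the trace depend on the model and tools configured in the recipe. As a minimal sketch (the field names below are assumptions for illustration, not the platform's schema), a single trace entry might look like this:

```python
# Hypothetical trace entry; field names are illustrative only.
trace_entry = {
    "prompt": {
        "input": "Summarize the open support tickets.",           # what was sent to the LLM
        "output": "There are 12 open tickets; most concern ...",  # what the LLM returned
    },
    "attributes": {
        "input_tokens": 412,        # prompt instructions + context + user query
        "completion_tokens": 128,   # tokens generated by the model
        "model_parameters": {"temperature": 0.2, "max_tokens": 512},
    },
}

# Example: total token usage for this operation
usage = trace_entry["attributes"]
print(usage["input_tokens"] + usage["completion_tokens"])  # 540
```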

Statistics

You can view the following statistics when the response is generated after a test.

  • LLM Response Time: The amount of time, in milliseconds, taken by the LLM to generate the complete response for the given prompt request.

  • LLM Request Timestamp: The specific time at which the request was made to the large language model (LLM).

  • Time to First Token (TTFT): The time it takes for the model to produce the first token of the response after receiving the prompt. TTFT is particularly relevant for streaming applications, where providing immediate feedback is crucial (see the timing sketch after this list).

  • Input Tokens: Total number of input tokens in the LLM request. This includes the prompt instructions, system prompt, context and user query.

  • Output Tokens: Total number of output tokens generated by the LLM in response to the prompt request. This number does not exceed the Max Tokens value set during prompt testing.

  • Input Unsafety Score: Measures the potential risk or danger associated with a given input. A higher score indicates a greater level of unsafety.

  • Input Toxicity Score: This score represents the likelihood that the input text could be perceived as toxic or harmful.
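
The two timing metrics differ only in which timestamp they are measured against: TTFT stops at the first streamed token, while LLM Response Time stops when the stream ends. Below is a minimal sketch of measuring both around a streamed request, assuming a generic streaming client; `stream_completion` is a stand-in, not this platform's API.

```python
import time
from typing import Iterator

def stream_completion(prompt: str) -> Iterator[str]:
    """Stand-in for a streaming LLM call; replace with a real client."""
    for token in ["The ", "answer ", "is ", "42."]:
        time.sleep(0.05)  # simulate generation latency
        yield token

def measure_latency(prompt: str) -> dict:
    request_ts = time.time()              # LLM Request Timestamp
    first_token_ts = None
    output_tokens = 0
    for token in stream_completion(prompt):
        if first_token_ts is None:
            first_token_ts = time.time()  # first token arrives: basis for TTFT
        output_tokens += 1
    done_ts = time.time()                 # stream finished: basis for response time
    return {
        "time_to_first_token_ms": round((first_token_ts - request_ts) * 1000),
        "llm_response_time_ms": round((done_ts - request_ts) * 1000),
        "output_tokens": output_tokens,   # rough proxy; real counts come from the model
    }

print(measure_latency("What is the answer?"))
```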

Export Recipe

To export the recipe and deploy a copilot, refer to the detailed instructions in the Export Recipe section. It provides step-by-step guidance through the entire process.

Copilots

To explore the features and functionality offered by Copilot, including its capabilities, settings, and customization options, refer to the Copilots section.
