Karini AI Documentation
Large Language Models (LLMs)


Karini AI supports integrations with the following Large Language Model providers and custom models. Using these models, users can create model endpoints in the Karini AI model hub.

  1. Amazon Bedrock

    In Amazon Bedrock, the "Model Serving" configuration provides two options for how your models are deployed and managed: On Demand and Provisioned Throughput.

    1. On Demand

      With the On Demand option, Amazon Bedrock automatically adjusts the computational resources to match the volume of requests your model receives. This means the system scales up or down in real-time based on demand, offering flexibility without the need for manual resource management. This option is suitable for workloads with varying traffic, ensuring you only pay for the resources used during active requests.

      The table below lists all available models.

    2. Provisioned Throughput

      The Provisioned Throughput option allows you to specify a fixed amount of computational capacity for your model, ensuring consistent performance and response times. This is ideal for use cases where you require a stable level of throughput, regardless of fluctuating demand. Resources are pre-allocated, which guarantees predictable performance but comes with a fixed cost, regardless of actual usage.

      When Provisioned Throughput is selected, the following model providers are available: Amazon, AI21 Labs, Anthropic, and Cohere.

      For more details on model providers, please refer to the relevant documentation.

    Model ARN: Enter the ARN (Amazon Resource Name) of the selected model in this field. The ARN serves as a unique identifier for the model within the AWS ecosystem, ensuring proper configuration and secure linkage to your account.
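An ARN can be sanity-checked before it is saved with the endpoint. The sketch below assumes the standard layout of a Bedrock provisioned-model ARN (`arn:aws:bedrock:<region>:<account-id>:provisioned-model/<model-id>`); the helper name and pattern are illustrative, not part of Karini AI.

```python
import re

# Assumed layout of a Bedrock provisioned-model ARN:
# arn:aws:bedrock:<region>:<account-id>:provisioned-model/<model-id>
ARN_PATTERN = re.compile(
    r"^arn:aws:bedrock:(?P<region>[a-z0-9-]+):(?P<account>\d{12})"
    r":provisioned-model/(?P<model_id>[a-z0-9]+)$"
)

def parse_model_arn(arn: str) -> dict:
    """Return the ARN components, or raise ValueError if malformed."""
    match = ARN_PATTERN.match(arn)
    if match is None:
        raise ValueError(f"Not a valid provisioned-model ARN: {arn!r}")
    return match.groupdict()

parts = parse_model_arn(
    "arn:aws:bedrock:us-east-1:123456789012:provisioned-model/abc123xyz456"
)
print(parts["region"])  # us-east-1
```

Catching a malformed ARN at configuration time avoids a failed endpoint test later.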

  2. OpenAI

  3. Azure OpenAI

  4. Databricks

  5. Anyscale

  6. Amazon SageMaker

Add New Model Endpoint

To add a new model endpoint to the model hub, do the following:

  1. On the Model Endpoints menu, select the Large Language Model Endpoints (LLM) tab and click Add New.

  2. Select a model provider and an associated model ID from the list.

  3. Optionally, override the default configurations such as temperature, max tokens, and pricing.

  4. By default, the organization-level credentials are used to access the model. Users can optionally overwrite them with a new set of model credentials.

  5. Test the model endpoint request and response by using the Test endpoint button.
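The exact request the Test endpoint button sends depends on the selected provider. As one hedged illustration, a Bedrock-hosted Claude model expects an Anthropic Messages request body; the helper below builds such a payload locally (the function name and default values are illustrative, not part of Karini AI):

```python
import json

def build_claude_request(prompt: str, max_tokens: int = 512,
                         temperature: float = 0.7) -> str:
    """Build an Anthropic Messages request body (as accepted by
    Bedrock-hosted Claude models) as a JSON string.

    The temperature and max_tokens defaults mirror the endpoint
    settings that can be overridden in step 3 above.
    """
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "temperature": temperature,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(body)

payload = build_claude_request("Hello, world")
print(json.loads(payload)["max_tokens"])  # 512
```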

Review Model Endpoints

Users can review the created model endpoints under the Large Language Model Endpoints (LLM) tab. Each endpoint includes the following information:

  1. Model provider and model ID

  2. Max Tokens, Min Tokens, and Temperature: the default values are taken from the model provider's specifications. Users can override them.

  3. A link to view the recipes and prompts in which the model endpoint is used.

  4. A link to view the model information, including the cost and usage dashboard for the model endpoint.
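The override behavior described above amounts to layering user-supplied values over the provider defaults. A minimal sketch, where the default values shown are illustrative rather than any provider's actual specification:

```python
# Provider-published defaults for an endpoint (values illustrative).
DEFAULTS = {"temperature": 1.0, "max_tokens": 4096}

def effective_config(overrides: dict) -> dict:
    """Merge user overrides over provider defaults; reject unknown keys."""
    unknown = set(overrides) - set(DEFAULTS)
    if unknown:
        raise KeyError(f"Unsupported parameters: {sorted(unknown)}")
    return {**DEFAULTS, **overrides}

print(effective_config({"temperature": 0.2}))
# {'temperature': 0.2, 'max_tokens': 4096}
```

Any parameter the user does not touch keeps the provider's published default.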

Available LLM Configurations

The following table describes the LLMs available for integration with the Karini AI model hub. It also includes links to model provider reference documentation with detailed information on model specifications, usage instructions, and API endpoints.

Amazon Bedrock

  Models:

  1. Anthropic Claude 3.7 Sonnet v1

  2. Anthropic Claude 3.5 Sonnet v2

  3. Anthropic Claude 3.5 Haiku 20241022 v1

  4. DeepSeek R1 v1

  5. Amazon Nova Pro v1

  6. Amazon Nova Lite v1

  7. Amazon Nova Micro v1

  8. Anthropic Claude 3.5 Sonnet 20240620 v1

  9. Anthropic Claude 3 Opus 20240229 v1

  10. Anthropic Claude 3 Sonnet 20240229 v1

  11. Anthropic Claude 3 Haiku 20240307 v1

  12. Anthropic Claude v2.1

  13. Anthropic Claude v2

  14. Anthropic Claude Instant v1

  15. Llama 3.3 70B Instruct

  16. Llama 3.2 1B Instruct

  17. Llama 3.2 3B Instruct

  18. Llama 3.2 11B Vision Instruct

  19. Llama 3.2 90B Vision Instruct

  20. Meta Llama 3.1 8B Instruct

  21. Meta Llama 3.1 70B Instruct

  22. Meta Llama 3 8B Instruct

  23. Meta Llama 3 70B Instruct

  24. Mistral 7B Instruct

  25. Mixtral 8x7B Instruct

  26. Mistral Large (24.02)

  27. Mistral Small (24.02)

  28. Cohere Command R Plus

  29. Cohere Command R

  30. Amazon Titan Text Premier

  31. Amazon Titan Text Express

  32. Amazon Titan Text Lite

  33. AI21 Jamba 1.5 Mini

  34. AI21 Jamba 1.5 Large

  35. AI21 Jamba Instruct

  Config parameters: Temperature, Max Tokens

  Reference: https://docs.aws.amazon.com/bedrock/latest/userguide/titan-text-models.html

Azure OpenAI

  Models:

  1. GPT-4o 2024-11-20

  2. GPT-4o Mini

  3. o3-mini

  4. o1

  5. GPT-4o 2024-08-06

  6. GPT-4o

  7. GPT-3.5 Turbo (Legacy)

  8. GPT-4 (Legacy)

  Config parameters: Temperature, Max Tokens, Azure OpenAI API Base, Azure OpenAI Deployment Name

  Reference: https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource?pivots=web-portal

OpenAI

  Models:

  1. GPT-4o 2024-11-20

  2. GPT-4o Mini

  3. o3-mini

  4. o1

  5. GPT-4o 2024-08-06

  6. GPT-4o

  7. Whisper-1

  8. TTS-1

  9. TTS-1 HD

  10. GPT-4 Turbo

  11. GPT-3.5 Turbo (Legacy)

  Config parameters: Temperature, Max Tokens

  Reference: https://platform.openai.com/docs/models

Google Gemini

  Models:

  1. Gemini 2.0 Flash

  2. Gemini 2.0 Flash-Lite Preview

  3. Gemini 1.5 Pro

  4. Gemini 1.5 Flash

  Config parameters: Temperature, Max Tokens

  Reference: https://ai.google.dev/gemini-api/docs/models

Vertex Gemini

  Models:

  1. Gemini 2.0 Flash

  2. Gemini 2.0 Flash-Lite Preview

  3. Gemini 2.0 Flash Thinking

  4. Gemini 1.5 Pro

  5. Gemini 1.5 Flash

  Config parameters: Temperature, Max Tokens

  Reference: https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models

Fireworks

  Models:

  1. Llama 4 Maverick Instruct (Basic)

  2. Llama 4 Scout Instruct (Basic)

  3. DeepSeek R1

  4. DeepSeek V3-0324

  5. Llama V3P1 405B Instruct

  6. Llama V3P3 70B Instruct

  Config parameters: Temperature, Max Tokens

  Reference: https://fireworks.ai/models

Anyscale

  Models:

  1. Google Gemma 7B

  2. Meta Llama 3 8B

  3. Meta Llama 3 70B

  4. Mistral 7B Instruct

  5. Mixtral 8x7B Instruct

  6. Mixtral 8x22B Instruct

  Config parameters: Temperature, Max Tokens

  Reference: https://docs.anyscale.com/endpoints/model-serving/get-started

Databricks

  Models:

  1. Foundation Models

    1. Databricks DBRX Instruct

    2. Meta Llama 3 70B Instruct

    3. Mixtral 8x7B Instruct

    4. Llama 2 70B Chat (Legacy)

  2. Databricks External Models

  3. Databricks Custom Models

  Config parameters: Temperature, Max Tokens, Endpoint URL (Databricks model endpoint URL)

  Reference: https://docs.databricks.com/en/generative-ai/external-models/index.html

Amazon SageMaker

  Config parameters: Temperature, Max Tokens, Model Endpoint Name (SageMaker model endpoint name)

  Reference: https://docs.aws.amazon.com/sagemaker/latest/dg/deploy-model.html

Cohere

  Models:

  1. Rerank-english-v3.0

  2. Rerank-multilingual-v3.0

  3. Rerank-english-v2.0

  4. Rerank-multilingual-v2.0

  Reference: https://docs.cohere.com/v2/docs/reranking-with-cohere

Model Price: The default price reflects the model provider's public pricing per 1,000 input and output tokens. Karini AI uses this price to calculate cost. Users can override it if needed, for example under a special pricing agreement with the model provider.
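The per-1,000-token prices above feed Karini AI's cost calculation. The arithmetic can be sketched as follows; the token counts and prices used here are placeholders, not any provider's actual rates:

```python
def inference_cost(input_tokens: int, output_tokens: int,
                   price_in_per_1k: float, price_out_per_1k: float) -> float:
    """Cost of one request given per-1,000-token input/output prices."""
    return (input_tokens / 1000) * price_in_per_1k \
         + (output_tokens / 1000) * price_out_per_1k

# 2,000 input tokens at $0.003/1K plus 500 output tokens at $0.015/1K
cost = inference_cost(2000, 500, 0.003, 0.015)
print(f"${cost:.4f}")  # $0.0135
```

Overriding the stored price changes only the two per-1K rates; the token accounting stays the same.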