Karini AI Documentation

Organization


Users can set up organization-level configurations for use across all resources within an organization. The following is a comprehensive list of configurations for an organization.

Organization Information

  • Organization Name: Enter the desired name for your organization.

  • Business Name: Specify the name of your business.

  • Domain: Provide the domain associated with your organization.

  • Type: Karini or Legal (Deprecated)

Credentials

All credentials are encrypted and secured in Karini AI's vault.

Setup AWS Credentials

AWS account ID:

Add one or more valid 12-digit AWS account IDs. You can link more than one AWS account.

In your AWS account, create a cross-account IAM role with an external ID. Locate your Karini AI Organization ID, visible in the top right corner of the Edit Organization page, and use it as the external ID in your IAM role. Follow the AWS documentation for details on creating an IAM role.

Attach an inline policy with permissions to selective services, resources, and actions. Refer to the AWS least-privilege permission guidance for security best practices. The following is a sample policy template for Karini AI to access the respective resources in your AWS account; you can further restrict the policy as per your needs.

Permission Policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "textract:DetectDocumentText",
                "textract:AnalyzeDocument",
                "textract:StartDocumentTextDetection",
                "textract:StartDocumentAnalysis",
                "textract:GetDocumentTextDetection",
                "textract:GetDocumentAnalysis"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "comprehend:DetectToxicContent",
                "comprehend:DetectEntities",
                "comprehend:DetectKeyPhrases",
                "comprehend:DetectSentiment",
                "comprehend:DetectSyntax",
                "comprehend:DetectDominantLanguage",
                "comprehend:ClassifyDocument",
                "comprehend:DetectPiiEntities",
                "comprehend:ContainsPiiEntities",
                "comprehend:DescribeDocumentClassifier",
                "comprehend:ListDocumentClassifiers",
                "comprehend:DescribeDocumentClassificationJob",
                "comprehend:ListDocumentClassificationJobs"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "sagemaker:InvokeEndpoint"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
                "bedrock:CreateGuardrail",
                "bedrock:UpdateGuardrail",
                "bedrock:DeleteGuardrail",
                "bedrock:GetGuardrail",
                "bedrock:ListGuardrails",
                "bedrock:ApplyGuardrail",
                "bedrock:CreateGuardrailVersion",
                "bedrock:ListKnowledgeBases",
                "bedrock:ListKnowledgeBaseDocuments",
                "bedrock:AssociateAgentKnowledgeBase",
                "bedrock:Retrieve"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:CreateBucket",
                "s3:DeleteObject",
                "s3:GetBucketLocation",
                "s3:GetObject",
                "s3:ListBucket",
                "s3:PutBucketCORS",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::*",
                "arn:aws:s3:::*/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "glue:GetDatabase",
                "glue:GetDatabases",
                "glue:GetTable",
                "glue:GetTables"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "athena:StartQueryExecution",
                "athena:GetQueryExecution",
                "athena:GetQueryResults"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ses:SendEmail",
                "ses:SendRawEmail"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "sqs:SendMessage"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "lambda:InvokeFunction"
            ],
            "Resource": "*"
        },
        {
            "Action": [
                "neptune-db:CancelQuery",
                "neptune-db:Connect",
                "neptune-db:CancelLoaderJob",
                "neptune-db:GetLoaderJobStatus",
                "neptune-db:DeleteDataViaQuery",
                "neptune-db:DeleteStatistics",
                "neptune-db:GetEngineStatus",
                "neptune-db:GetGraphSummary",
                "neptune-db:GetQueryStatus",
                "neptune-db:ReadDataViaQuery",
                "neptune-db:WriteDataViaQuery",
                "neptune-db:GetStatisticsStatus",
                "neptune-db:ListLoaderJobs",
                "neptune-db:StartLoaderJob",
                "neptune-db:GetStreamRecords"
            ],
            "Effect": "Allow",
            "Resource": "*"
        },
        {
            "Action": [
                "es:ESHttpGet",
                "es:ESHttpPut",
                "es:ESHttpPost",
                "es:ESHttpHead"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}

Trust Policy:

Attach a trust policy to allow the Karini AI principal to assume the role under the predefined conditions for your external ID. The following is a sample trust policy.

Note: Contact Karini AI for your trust policy configuration.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::111111111111:role/karini-role"
                ]
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "sts:ExternalId": [
                        "1111c0502c111a9b7b5f1111"
                    ]
                }
            }
        }
    ]
}
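
The role and its inline policy can also be created programmatically. The following is a minimal sketch using boto3; the role name, policy file names, and the values inside the trust policy (the Karini AI principal ARN and your external ID) are placeholders that must match your own configuration.

# Sketch: create the cross-account role and attach the inline permission policy with boto3.
# "KariniCrossAccountRole" and the policy file names are placeholders; the files contain the
# sample permission and trust policies shown above, filled in with your values.
import boto3

iam = boto3.client("iam")

with open("trust-policy.json") as f:
    trust_policy = f.read()
with open("permission-policy.json") as f:
    permission_policy = f.read()

role = iam.create_role(
    RoleName="KariniCrossAccountRole",
    AssumeRolePolicyDocument=trust_policy,
    Description="Cross-account role assumed by Karini AI",
)

iam.put_role_policy(
    RoleName="KariniCrossAccountRole",
    PolicyName="KariniAccessPolicy",
    PolicyDocument=permission_policy,
)

print(role["Role"]["Arn"])  # register this ARN in Karini AI as the AWS IAM Role ARN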

AWS Credentials:

Add a default global role used to access resources in your AWS account. The role can be overridden on the respective Model Hub or connector pages if you have a more restrictive role.

  • AWS IAM Role ARN: The unique identifier of the IAM role in your AWS account. Provide the AWS IAM Role ARN; refer to the IAM Roles Overview for more details.

  • AWS External ID: A unique identifier that third parties use when assuming roles in your account. This is a read-only field set to your Karini AI Organization ID; refer to Using External ID for more details.

  • AWS default region: Select your AWS region from the dropdown.
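
For reference, the sketch below illustrates how a cross-account role with an external ID is assumed through AWS STS. It is illustrative only: the role ARN, external ID (your Karini AI Organization ID), and region are placeholders, and the call only succeeds for the principal named in the trust policy.

# Illustrative sketch: assuming a cross-account role with an external ID via AWS STS.
import boto3

sts = boto3.client("sts")
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/KariniCrossAccountRole",  # placeholder ARN
    RoleSessionName="karini-session",
    ExternalId="1111c0502c111a9b7b5f1111",                            # your organization ID
)
creds = resp["Credentials"]

# The temporary credentials are scoped to the role's permission policy, e.g. for Textract:
textract = boto3.client(
    "textract",
    region_name="us-east-1",                                          # your default region
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)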

Model Provider Credentials

OpenAI Credentials:

  • OpenAI Key: OpenAI API key to access the registered OpenAI models.
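
Before registering the key, you may want to verify that it works. Below is a minimal sketch using the official openai Python package (assuming version 1.x); the key value is a placeholder.

# Sketch: sanity-check an OpenAI API key by listing the models it can access.
from openai import OpenAI

client = OpenAI(api_key="sk-...")        # placeholder: your OpenAI API key
models = client.models.list()
print([m.id for m in models.data][:5])   # a few accessible model IDs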

Azure OpenAI Credentials:

  • Azure OpenAI Key: Azure OpenAI API key to access the registered Azure OpenAI models.

Vertex Gemini Credentials:

  • Vertex Gemini JSON Credentials: This JSON file contains the credentials for a Google Cloud service account, enabling secure, server-to-server authentication with Google APIs without user interaction.
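
For context, the sketch below shows how such a service account JSON file is used for server-to-server authentication with the google-auth library; the file name and scope are assumptions.

# Sketch: load a Google Cloud service account JSON and obtain an access token
# without any user interaction. "service-account.json" is a placeholder path.
from google.oauth2 import service_account
from google.auth.transport.requests import Request

credentials = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/cloud-platform"],
)
credentials.refresh(Request())           # fetch an OAuth2 access token
print(credentials.token is not None)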

Google Gemini credentials

  • Google Gemini key: An authentication key used to securely access and interact with Google Gemini's AI models and services.

Fireworks credentials

  • Fireworks Key: A unique authentication key required to securely access and interact with the Fireworks models and services.

Anyscale:

  • Anyscale API Key: The unique authentication token required to access Anyscale services and resources.

  • Anyscale API Base: The endpoint where Anyscale services are hosted, facilitating communication between client applications and the Anyscale platform.

Reranker LLM Credentials

  • Cohere API Key: A unique authentication key provided by Cohere, used to access the Cohere reranker model during the embeddings retrieval process.
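
For context, the sketch below shows how a rerank call looks with the Cohere Python SDK; the model name, query, and documents are placeholders.

# Sketch: rerank retrieved passages against a query using the Cohere API key.
import cohere

co = cohere.Client("COHERE_API_KEY")     # placeholder key
response = co.rerank(
    model="rerank-english-v3.0",         # placeholder reranker model
    query="How do I rotate AWS credentials?",
    documents=[
        "Rotate IAM access keys regularly.",
        "Databricks clusters can be restarted from the UI.",
    ],
    top_n=1,
)
print(response.results[0].index)         # index of the highest-ranked document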

Data connector Provider Credentials

Azure Cloud Credentials

  • Azure Account Name: The unique identifier associated with your Azure account.

  • Azure Account Key: A secret authentication key required to access Azure services and resources securely.
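
A minimal sketch of how these two values are typically used with the azure-storage-blob package; the account name, key, and listing call are illustrative only.

# Sketch: connect to Azure Blob Storage with the account name and account key.
from azure.storage.blob import BlobServiceClient

account_name = "mystorageaccount"        # placeholder Azure Account Name
account_key = "AZURE_ACCOUNT_KEY"        # placeholder Azure Account Key

service = BlobServiceClient(
    account_url=f"https://{account_name}.blob.core.windows.net",
    credential=account_key,
)
for container in service.list_containers():
    print(container.name)                # containers visible to this account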

Confluence Credentials

  • Confluence Account Name: Username or account identifier associated with the Confluence account.

  • Confluence Key: The unique identifier or token assigned to the Confluence account for authentication purposes.

  • Confluence Product URL: The web address or URL of the Confluence product where the account is hosted, used for accessing Confluence services.
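
A minimal sketch of how these three values fit together, assuming the atlassian-python-api package; the URL, account name, and token are placeholders.

# Sketch: verify Confluence credentials by listing a few spaces.
from atlassian import Confluence

confluence = Confluence(
    url="https://your-domain.atlassian.net",   # Confluence Product URL (placeholder)
    username="user@example.com",               # Confluence Account Name (placeholder)
    password="CONFLUENCE_API_TOKEN",           # Confluence Key (placeholder)
    cloud=True,
)
spaces = confluence.get_all_spaces(start=0, limit=5)
print([space["key"] for space in spaces["results"]])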

Google Cloud Services Credentials

  • Credentials JSON: JSON credentials refer to authentication information stored in JSON format, typically containing essential details such as client ID, client secret, and other necessary credentials required for authenticating access to a service or platform.

Box Credentials

  • Box Credentials: Paste credentials JSON containing "client_id", "client_secret", "access_token", "refresh_token".
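
A minimal sketch of how the pasted JSON is typically consumed, assuming the boxsdk package; the credentials file name is a placeholder.

# Sketch: authenticate to Box with the client ID, client secret, access token, and refresh token.
import json
from boxsdk import OAuth2, Client

with open("box_credentials.json") as f:   # placeholder file containing the pasted JSON
    creds = json.load(f)

oauth = OAuth2(
    client_id=creds["client_id"],
    client_secret=creds["client_secret"],
    access_token=creds["access_token"],
    refresh_token=creds["refresh_token"],
)
client = Client(oauth)
print(client.user().get().login)          # confirms the credentials resolve to a Box user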

Google Drive Credentials

  • Credentials JSON: JSON credentials refer to authentication information stored in JSON format, typically containing essential details such as client ID, client secret, and other necessary credentials required for authenticating access to a service or platform.

Dropbox Credentials

  • Dropbox Token: Enter the access token, refresh token, client ID, and client secret that allow secure access to your Dropbox account.
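
A minimal sketch of how these values are used with the dropbox package; all four values are placeholders.

# Sketch: refresh-token based access to a Dropbox account.
import dropbox

dbx = dropbox.Dropbox(
    oauth2_access_token="ACCESS_TOKEN",    # token
    oauth2_refresh_token="REFRESH_TOKEN",  # refresh token
    app_key="CLIENT_ID",                   # client ID
    app_secret="CLIENT_SECRET",            # client secret
)
print(dbx.users_get_current_account().email)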

Sharepoint Credentials

  • Sharepoint Client ID: A unique identifier assigned to your Sharepoint application for authentication in the system.

  • Sharepoint Client Secret: A secure key associated with your Sharepoint application used alongside the Client ID to authenticate API requests.

  • Sharepoint Tenant ID: A unique identifier for your organization's Sharepoint instance, used to specify the tenant during authentication.
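
A minimal sketch of how the three values are used to obtain an app-only token, assuming the msal package; all values and the Graph scope are placeholders.

# Sketch: acquire an application token with the Sharepoint client ID, client secret, and tenant ID.
import msal

tenant_id = "TENANT_ID"                    # placeholder Sharepoint Tenant ID
app = msal.ConfidentialClientApplication(
    client_id="CLIENT_ID",                 # placeholder Sharepoint Client ID
    client_credential="CLIENT_SECRET",     # placeholder Sharepoint Client Secret
    authority=f"https://login.microsoftonline.com/{tenant_id}",
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
print("access_token" in token)             # True if the credentials are valid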

Databricks Runtime Credentials

Databricks Credentials

Credentials for Databricks Workspace.

  • Databricks Host URL: The URL of your Databricks workspace

  • Databricks API Token: The unique identifier that grants access to Databricks API endpoints. It's used for authentication when making requests to the Databricks API.

  • Databricks HTTP Path: Connection details for Databricks SQL Warehouse.

  • Databricks Cluster ID [optional]: The unique identifier assigned to your Databricks cluster. It allows you to specify which cluster your job or task should run on within your Databricks workspace.

  • Databricks AWS IAM Instance Role: The EC2 instance role that grants permission to AWS services with an sts:AssumeRole policy. This role is used by Databricks to launch the job cluster. For details about creating this role, refer to the Databricks prerequisites.
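
A minimal sketch of how the host URL, HTTP path, and API token fit together, assuming the databricks-sql-connector package; all values are placeholders.

# Sketch: run a test query against a Databricks SQL Warehouse.
from databricks import sql

with sql.connect(
    server_hostname="adb-1234567890123456.7.azuredatabricks.net",  # workspace host, without https://
    http_path="/sql/1.0/warehouses/abcdef1234567890",              # Databricks HTTP Path
    access_token="DATABRICKS_API_TOKEN",                           # Databricks API Token
) as connection:
    with connection.cursor() as cursor:
        cursor.execute("SELECT 1")
        print(cursor.fetchall())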

Global Default Model Endpoints

Intent Detector LLM

LLM that detects the intent of chatbot queries and classifies them as specific or not specific. The prompt for intent detection can be modified on the recipe's output tile.

  • Intent Detector LLM endpoint: Select LLM model endpoint from the registered model endpoints list in the UI.

Followup Questions Generator

LLM to generate follow-up questions in chatbot based on conversation history.

  • Followup questions generator model endpoint: Select LLM model endpoint from the registered model endpoints list in the UI.

Natural Language Assistant

LLM to assist in a variety of natural language processing tasks, including text and JSON schema generation, as well as query rewriting and expansion.

  • Natural Language Assistant model endpoint: Select LLM endpoint from the registered model endpoints list in the UI.

Global VLM Model

Vision language model (VLM) used to extract and recognize text from images or documents, improving the accuracy of visual data processing using Optical Character Recognition (OCR) technology.

  • Global VLM model endpoint: Select LLM endpoint from the registered model endpoints list in the UI.

Global Embeddings Model

The embeddings model used to generate the Catalog Vector index. Embedding models with a dimension of 1536 are supported.

  • Global Embeddings model endpoint: Select the Global Embeddings model endpoint from the registered embedding model endpoints list in the UI.

Default Guardrail

Karini AI supports Amazon Bedrock Guardrails. The default guardrail is applied to supported models and model providers. Users can override it with a more specific guardrail endpoint.

  • Default guardrail endpoint: Select the appropriate guardrail from the list.

Speech to Text

Karini AI supports an audio mode for chatbots, which requires a Speech to Text model. Ensure the model endpoint is added to the Karini AI Model Hub.

  • Speech to Text model endpoint: Select one of the available Speech to Text model endpoints.

Text to Speech

Karini AI supports an audio mode for chatbots, which requires a Text to Speech model. Ensure the model endpoint is added to the Karini AI Model Hub.

  • Text to Speech model endpoint: Select one of the available Text to Speech model endpoints.

Custom metadata extraction model

The metadata extraction model used for the custom metadata extraction prompt.

  • Custom metadata extraction model endpoint: Select a model endpoint from the registered model endpoints list in the UI.

Reranker Model Endpoint

Reranker model used to re-rank the search results based on the user query.

  • Select reranker model endpoint: Select a reranker model endpoint from the available model endpoints in the UI.

ACL Groups Configuration

Administrators can configure Access Control Lists (ACLs) for various applications and services within the organization to enable user access to data. The dropdown menus allow selection of platforms such as Amazon S3, Box, Dropbox, Google Drive, etc., enabling administrators to assign access to specific identity provider groups. This ensures secure access to resources and maintains governance.

  • Identity Provider Group ID: Associates a specific identity provider group (e.g., AWS, Google Cloud) with the ACL configuration.

  • Application: A dropdown menu for selecting the application or platform from a predefined list (Amazon S3, Box, Google Drive, etc.).

  • Application Group ID (Optional): An optional field for providing a specific ID for a group within the selected application, for example a sub-group within Amazon S3.

  • Application Group Name: A field for specifying the name of the group, enabling easier identification and management of ACLs.
