BYOLLM (Bring Your Own LLM)

BYOLLM (Bring Your Own LLM) lets you connect your own Large Language Model (LLM) provider to CrawlDesk.

Instead of relying only on the platform’s managed AI configuration, BYOLLM enables developers to configure their own provider, model, and API key.

This provides teams with greater control over:

  • AI provider selection
  • Model configuration
  • API usage and billing
  • AI infrastructure policies

BYOLLM is configured at the project level, meaning each project can define its own AI provider and model.

Configuration is available in: Project Settings → AI Config

Once configured, CrawlDesk uses the selected provider for AI-powered docs search.
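As a rough sketch, a project-level BYOLLM configuration holds three values: provider, model, and API key. The field names, the validation helper, and the example model name below are illustrative assumptions, not CrawlDesk's actual schema:

```python
# Hypothetical sketch of a project-level BYOLLM configuration.
# Field names are illustrative assumptions, not CrawlDesk's schema.

SUPPORTED_PROVIDERS = {"gemini", "openai"}  # per the Supported Providers table

def validate_ai_config(config: dict) -> list[str]:
    """Return a list of problems with a BYOLLM config dict (empty if valid)."""
    errors = []
    if config.get("provider", "").lower() not in SUPPORTED_PROVIDERS:
        errors.append(f"unsupported provider: {config.get('provider')!r}")
    if not config.get("model"):
        errors.append("model is required")
    if not config.get("api_key"):
        errors.append("api_key is required")
    return errors

project_ai_config = {
    "provider": "openai",    # or "gemini"
    "model": "gpt-4o-mini",  # example model name
    "api_key": "sk-...",     # your provider API key
}

print(validate_ai_config(project_ai_config))  # an empty list means the config is complete
```

Each project carries its own copy of these values, so two projects can point at different providers or models without interfering with each other.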

Business Use Cases

  • Use Your Existing AI Provider: Connect the same AI provider your team already uses and keep AI usage within your existing infrastructure.
  • Control AI Costs: AI usage is billed directly by your provider, allowing you to monitor and manage costs using the provider’s billing and analytics tools.
  • Choose Specific Models: Different workloads require different models. BYOLLM allows developers to select models optimized for speed, reasoning, or cost efficiency.
  • Enterprise Compliance: Organizations that require strict control over external services can manage AI access using their own provider credentials and internal policies.

Integration Guide

To configure your own AI provider using BYOLLM:

  1. Open Project Settings
  2. Navigate to AI Config
  3. Select the AI Provider
  4. Choose the model
  5. Enter your API key
  6. Click Save AI Config
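Steps 3-5 collect the same three values described above. A sketch of assembling them without hard-coding the API key, reading it from an environment variable instead (the variable name, function, and field names are assumptions for illustration, not part of CrawlDesk):

```python
# Assemble the provider, model, and API key from steps 3-5.
# The env-var name and field names are illustrative assumptions.
import os

def build_ai_config(provider: str, model: str, env_var: str = "LLM_API_KEY") -> dict:
    """Build the values to enter in Project Settings -> AI Config."""
    api_key = os.environ.get(env_var)
    if not api_key:
        raise RuntimeError(f"set {env_var} before configuring BYOLLM")
    return {"provider": provider, "model": model, "api_key": api_key}
```

Keeping the key in an environment variable (or a secrets manager) avoids committing it to source control, which also makes the periodic rotation recommended below easier.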

After saving the configuration, CrawlDesk will begin using the selected provider for AI-powered processing.

Supported Providers

CrawlDesk currently supports the following providers for BYOLLM.

  Provider   Status
  Gemini     Supported
  OpenAI     Supported

If you require integration with an AI provider that is not listed above, you can contact the CrawlDesk team for support.

The team can assist with evaluating and enabling additional provider integrations where needed.

Best Practices

  • Choose fast models for large crawling pipelines
  • Monitor AI usage through your provider dashboard
  • Test configuration with small crawls before running large jobs
  • Keep API keys secure and rotate them periodically
  • Use Default AI when prototyping workflows
  • Standardize provider and model configuration across projects