Your code. Your data. Your AI. Eldric Client is a complete AI development environment that never sends your data to the cloud. Train custom models, build knowledge bases, and automate your entire workflow — all locally.

Why Train Your Own AI?

Generic AI models don't understand your codebase, your architecture decisions, or your team's conventions. With Eldric Client, you can create an AI that truly knows your project.

Your Codebase as Training Data

Feed your entire repository, documentation, and coding standards to train a model that understands your specific patterns, naming conventions, and architectural decisions.

Domain-Specific Knowledge

Whether it's financial regulations, medical terminology, or proprietary algorithms — train models that speak your industry's language fluently.

Zero Data Leakage

Every training step happens on your machine. Your proprietary code, trade secrets, and customer data never touch external servers.

Continuous Improvement

As your project evolves, retrain your model. Keep your AI assistant current with the latest changes, deprecations, and new features.

Train Custom Models in 4 Steps

1. Prepare: collect your code, docs, and examples
2. Configure: set LoRA parameters and the base model
3. Train: run training with progress monitoring
4. Deploy: merge the adapter and start using it
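Eldric's training internals aren't shown here, but the idea behind LoRA (the "rank" you set in step 2) is simple enough to sketch. A minimal NumPy illustration of the update rule and the final adapter merge; all dimensions, the rank, and the alpha scale below are arbitrary choices for the demo, not Eldric defaults:

```python
import numpy as np

# LoRA: instead of updating the full weight matrix W (d_out x d_in),
# train a low-rank delta B @ A with rank r << min(d_out, d_in).
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 64, 8, 16

W = rng.normal(size=(d_out, d_in))     # frozen base weights
A = rng.normal(size=(r, d_in)) * 0.01  # trainable, small random init
B = np.zeros((d_out, r))               # trainable, zero init: delta starts at 0

def lora_forward(x):
    # Base path plus low-rank adapter path, scaled by alpha / r.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# With B initialized to zero, the adapter contributes nothing yet:
assert np.allclose(lora_forward(x), W @ x)

# Simulate a few optimizer updates to the adapter only (W stays frozen):
A += 0.01 * rng.normal(size=A.shape)
B += 0.01 * rng.normal(size=B.shape)

# "Merge adapter" (step 4) folds the learned delta back into W:
W_merged = W + (alpha / r) * (B @ A)
assert np.allclose(W_merged @ x, lora_forward(x))

print(r * (d_in + d_out), "trainable values vs", d_in * d_out)  # 1024 vs 4096
```

The payoff is in the last line: the adapter trains r * (d_in + d_out) values instead of d_in * d_out, which is why LoRA fine-tuning fits on a single local machine.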

Training Your Own Model
$ eldric
# Eldric v3.0 — Let's train a custom model

You: /train create

Eldric: Starting training job configuration...

[Training] Configure your fine-tuning job:
  Base model: llama3.1:8b
  Method: QLoRA (4-bit quantized)
  Dataset: ./my-codebase-examples.jsonl

[Dataset] Analyzing 2,847 training examples...
  ✓ Code completions: 1,245
  ✓ Bug fixes: 834
  ✓ Documentation: 768

You: Start training with rank 32 and 3 epochs

[Training] Job started: job_abc123
  Progress: ████████████░░░░░░░░ 62%
  Loss: 0.847 → 0.234
  ETA: 47 minutes

Training complete! Model saved to: ~/.eldric/models/my-custom-llama

You: /model my-custom-llama
Switched to my-custom-llama (fine-tuned on your codebase)

You: How do we handle authentication in this project?

Eldric: Based on your codebase patterns, you use JWT with refresh 
tokens. The auth flow is in src/auth/middleware.ts. Here's how 
to add a new protected route following your conventions...
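The transcript reads a dataset from `./my-codebase-examples.jsonl`, but its exact schema isn't documented above. A common convention for fine-tuning data is one JSON object per line with prompt/completion fields; the field names and contents below are hypothetical, paired with a small validator you could run before starting a job:

```python
import json
import os
import tempfile

# Hypothetical schema for a dataset like my-codebase-examples.jsonl:
# one JSON object per line, each with "prompt" and "completion" fields.
examples = [
    {"prompt": "Add a protected route for /billing",
     "completion": "router.get('/billing', requireAuth, billingHandler)"},
    {"prompt": "Fix: token refresh loops forever",
     "completion": "Check the expiry timestamp before calling refresh()"},
]

path = os.path.join(tempfile.mkdtemp(), "examples.jsonl")
with open(path, "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

def validate(path):
    """Count well-formed examples; fail fast on rows missing required keys."""
    n = 0
    with open(path) as f:
        for line in f:
            row = json.loads(line)
            assert {"prompt", "completion"} <= row.keys()
            n += 1
    return n

print(validate(path))  # 2
```

A validator like this is cheap insurance: one malformed line out of 2,847 is much easier to find before a 47-minute training run than after.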

Model Merging: Combine the Best of Multiple Models

Why choose between models when you can merge them? Combine a coding specialist with a documentation expert, or blend reasoning capabilities with domain expertise.

                    MODEL MERGING WORKFLOW
    
    ┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
    │   Code Model    │     │   Docs Model    │     │  Domain Model   │
    │   (CodeLlama)   │     │   (Mistral)     │     │  (Your LoRA)    │
    └────────┬────────┘     └────────┬────────┘     └────────┬────────┘
             │                       │                       │
             └───────────────────────┼───────────────────────┘
                                     │
                           ┌─────────▼─────────┐
                           │   MERGE ENGINE    │
                           │                   │
                           │  • SLERP (smooth) │
                           │  • TIES (sparse)  │
                           │  • DARE (random)  │
                           │  • Linear blend   │
                           └─────────┬─────────┘
                                     │
                           ┌─────────▼─────────┐
                           │   YOUR CUSTOM     │
                           │   SUPER MODEL     │
                           │                   │
                           │  Best of all      │
                           │  three models!    │
                           └───────────────────┘
                
Model Merging
You: /merge create

Eldric: Let's create a merged model.

[Merge] Select models to merge:
  1. codellama:13b (coding specialist)
  2. mistral:7b (general reasoning)  
  3. my-domain-lora (your fine-tune)

[Merge] Choose merge strategy:
  → SLERP - Spherical interpolation (recommended)

[Merge] Set weights:
  codellama: 0.4
  mistral: 0.3  
  my-domain: 0.3

[Processing] Merging tensors...
  Progress: ████████████████████ 100%

Merge complete! 
  Output: my-merged-expert
  Size: 13.2 GB
  
# Your new model combines:
# - CodeLlama's coding ability
# - Mistral's reasoning
# - Your domain knowledge
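The SLERP strategy recommended above interpolates along the arc between weight tensors rather than the straight line between them, which tends to preserve each model's learned structure better than a plain average. A minimal NumPy sketch of the idea applied to one stand-in layer; real merge tooling works per-tensor across whole checkpoints and handles normalization and tokenizer alignment more carefully:

```python
import numpy as np

def slerp(a, b, t, eps=1e-8):
    """Spherical linear interpolation between two weight tensors (flattened).
    Falls back to linear interpolation when the vectors are nearly parallel."""
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    dot = np.clip(np.dot(a_n, b_n), -1.0, 1.0)
    theta = np.arccos(dot)            # angle between the two weight directions
    if theta < eps:                   # nearly parallel: plain LERP is stable
        return (1 - t) * a + t * b
    s = np.sin(theta)
    return (np.sin((1 - t) * theta) / s) * a + (np.sin(t * theta) / s) * b

rng = np.random.default_rng(1)
w_code = rng.normal(size=1000)  # stand-in for one layer of the coding model
w_docs = rng.normal(size=1000)  # the same layer of the docs model

merged = slerp(w_code, w_docs, t=0.4)  # 0.4 toward docs, like the weights above
# Sanity checks: the endpoints recover the original models exactly.
assert np.allclose(slerp(w_code, w_docs, 0.0), w_code)
assert np.allclose(slerp(w_code, w_docs, 1.0), w_docs)
```

Note that SLERP is pairwise; merging three models as in the transcript means chaining merges or using a multi-model strategy like TIES or a linear blend.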

RAG: Your Personal Knowledge Base

Retrieval-Augmented Generation

Not everything needs fine-tuning. For rapidly changing information, documentation, or large knowledge bases, RAG provides instant context injection without retraining.

Ingest Everything

PDF, Markdown, code files, HTML, DOCX, TXT — 20+ formats automatically chunked and embedded.

Semantic Search

Find relevant context by meaning, not just keywords. FAISS and Chroma backends for fast retrieval.

Always Current

Update your knowledge base anytime. No retraining needed — just re-index changed documents.
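Under the hood, RAG retrieval ranks chunks by embedding similarity to the query. The sketch below uses a toy bag-of-words "embedding" and plain cosine similarity purely to show the principle; Eldric's pipeline uses real neural embedding models with FAISS or Chroma as the vector store:

```python
import math
from collections import Counter

def embed(text):
    # Toy embedding: term counts. A real pipeline uses a neural model here.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def index(chunks):
    # Embed every chunk once at ingest time.
    return [(chunk, embed(chunk)) for chunk in chunks]

def retrieve(query, store, k=2):
    # Rank chunks by similarity to the query and return the top k.
    q = embed(query)
    ranked = sorted(store, key=lambda cv: cosine(q, cv[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

store = index([
    "Free tier allows 100 requests per minute",
    "Deployments run through the CI pipeline on merge",
    "Rate limiting uses a token bucket backed by Redis",
])
top = retrieve("api rate limiting policy", store, k=2)
print(top[0])  # the token bucket chunk ranks first
```

"Always current" falls out of this design: updating a document only requires re-running `index` on its chunks, never retraining the model.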

Building a Knowledge Base
You: /learn ./company-docs

Eldric: Indexing documents for RAG...

[Learn] Scanning directory...
  Found 342 documents (127 MB)

[Embed] Creating embeddings...
  Progress: ████████████████████ 100%
  Chunks: 8,456
  Vector DB: FAISS

Knowledge base ready!

You: What's our policy on API rate limiting?

Eldric: [RAG Search] Found 3 relevant documents

Based on your internal documentation (api-guidelines.md):

Your API rate limiting policy is:
- Free tier: 100 requests/minute
- Pro tier: 1000 requests/minute
- Enterprise: Custom limits

Implementation uses token bucket algorithm with Redis backend.
See src/middleware/rate-limiter.ts for the implementation.

[Source: docs/api-guidelines.md, lines 45-78]
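The answer above names a token bucket algorithm with a Redis backend. As a minimal in-memory sketch of how that algorithm works (the Redis-backed version would keep the same counters server-side; the fake clock here just makes refill behavior deterministic):

```python
import time

class TokenBucket:
    """In-memory token bucket: holds up to `capacity` tokens,
    refilled continuously at `rate` tokens per second."""

    def __init__(self, capacity, rate, clock=time.monotonic):
        self.capacity = capacity
        self.rate = rate
        self.clock = clock
        self.tokens = float(capacity)
        self.last = clock()

    def allow(self, cost=1):
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

# Deterministic demo with a fake clock: a 5-request burst passes, the 6th fails.
t = [0.0]
bucket = TokenBucket(capacity=5, rate=1.0, clock=lambda: t[0])
results = [bucket.allow() for _ in range(6)]
print(results)            # five True, then False
t[0] += 2.0               # two seconds pass, so two tokens refill
print(bucket.allow())     # True again
```

Bursts up to the bucket capacity are allowed, while sustained traffic is held to the refill rate, which is exactly the shape of the tiered limits quoted above.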

Complete Privacy by Design

Air-Gap Ready

Works completely offline. Perfect for classified environments, secure facilities, or anywhere without internet.

  • No network required
  • No telemetry
  • No cloud dependencies

Data Sovereignty

Your training data, prompts, and outputs never leave your machine. Full compliance with data residency requirements.

  • GDPR compliant
  • HIPAA capable
  • SOC2 ready

Audit Trail

Complete logging of all AI interactions. Know exactly what your AI assistant did and why.

  • Session recording
  • Tool call logging
  • Export capabilities

All the Power You Need

40+ Tools

  • File: Read, Write, Edit, Glob, Grep
  • Shell: Bash execution
  • Web: Search, Fetch
  • DB: Connect, Query, Schema

13 Agents

  • General, Explorer, Coder
  • Runner, Planner, Searcher
  • Database, Network, Trainer

Training

  • LoRA / QLoRA fine-tuning
  • Dataset management
  • Progress monitoring
  • Checkpoint management

Interfaces

  • Terminal CLI (Linux, macOS)
  • Native macOS App
  • Session persistence
  • Export to Markdown

AI Backends

  • Ollama (local)
  • vLLM, llama.cpp
  • HuggingFace TGI
  • OpenAI-compatible APIs

Enterprise Backends

  • NVIDIA Triton
  • TensorFlow Serving
  • Custom backends

Reasoning

  • Chain-of-thought analysis
  • COCONUT latent reasoning
  • Confidence scoring
  • Reasoning visualization

MoE Support

  • Mixture-of-Experts models
  • Expert routing control
  • Utilization monitoring
  • Custom expert selection

Real Workflows, Real Results

Codebase Onboarding

New to a project? Train a model on the codebase, then ask it anything: "How does authentication work?", "Where are the API routes?", "What's the deployment process?"

Documentation Generator

Fine-tune on your existing docs and coding style. Generate consistent, accurate documentation that matches your team's conventions.

Code Review Assistant

Train on your code review history. Get suggestions that align with your team's standards and catch issues specific to your codebase.

Legacy Code Expert

Feed your legacy codebase to create an AI that understands that old Perl script or COBOL system nobody wants to touch.

Start Building Your AI Today

See how Eldric Client can transform your development workflow.

Request Demo