Core Systems

The foundational modules that power the Eldric client


CLI Interface Stable

Your Command Center

The main interactive terminal interface. Handles user input, command parsing, response streaming, and orchestrates all other modules. Supports both interactive mode and single-prompt execution for scripting.

Interactive Mode

Full conversation with history, auto-suggestions, and multi-line input

Streaming Responses

Real-time token streaming with Ctrl+C cancellation support

Command System

/model, /agent, /mcp, /export and 30+ built-in commands

Context Management

Automatic context building with RAG injection

Usage Examples

Terminal
# Interactive mode
$ eldric

# Single prompt (for scripting)
$ eldric "Explain this error: connection refused"

# With specific model
$ eldric -m llama3.1:70b "Review this code"

# Auto-approve tool execution
$ eldric --auto-approve "Run the tests and fix any failures"

# Pipe input for processing
$ cat error.log | eldric "Analyze these errors"
Key Commands
/model <name> - Switch AI model
/agent <name> - Switch to specialized agent
/history - Show conversation history
/export <file> - Export session to markdown
/clear - Clear conversation context

Ollama Client Stable

LLM Communication Layer

HTTP client for communicating with Ollama's REST API. Handles model inference, streaming responses, model management, and multimodal inputs. Supports all Ollama-compatible endpoints.

Streaming Chat

Real-time token streaming with cancellation

Model Management

Pull, push, create, and list models

Vision Support

Image analysis with multimodal models

Tool Detection

Automatic tool-calling capability detection

Usage Examples

Terminal
# Test Ollama connection
eldric> /status
Connected to Ollama at http://localhost:11434
Model: llama3.1:8b (supports tool calling)
Context: 8192 tokens

# List available models
eldric> /models
Available models:
  llama3.1:8b      (4.7 GB)  ★ current
  llama3.1:70b     (40 GB)
  codellama:34b    (19 GB)
  llava:13b        (8.0 GB)  [vision]
  nomic-embed-text (274 MB)  [embeddings]

# Pull a new model
eldric> /pull deepseek-coder:6.7b
Pulling deepseek-coder:6.7b... 100%
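
Under the hood these commands talk to Ollama's REST API. As a rough illustration of the streaming path, the standalone Python sketch below consumes /api/chat directly and prints tokens as they arrive; the host and model name match the defaults shown above, and this is not Eldric's actual implementation.

Example (Python)
# stream_chat.py - minimal sketch of consuming Ollama's streaming /api/chat endpoint
import json
import urllib.request

OLLAMA_HOST = "http://localhost:11434"   # same default the client uses

def stream_chat(model: str, prompt: str) -> str:
    """Send one user message and print tokens as they arrive."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": True,
    }).encode("utf-8")
    req = urllib.request.Request(
        f"{OLLAMA_HOST}/api/chat",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    full = []
    with urllib.request.urlopen(req) as resp:
        for line in resp:                        # one JSON object per line
            chunk = json.loads(line)
            token = chunk.get("message", {}).get("content", "")
            print(token, end="", flush=True)
            full.append(token)
            if chunk.get("done"):
                break
    print()
    return "".join(full)

if __name__ == "__main__":
    stream_chat("llama3.1:8b", "Explain this error: connection refused")
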
Configuration
OLLAMA_HOST - Ollama server URL (default: http://localhost:11434)
ELDRIC_MODEL - Default model to use
ELDRIC_TIMEOUT - Request timeout in seconds

Configuration Stable

Persistent Settings

SQLite-based configuration management. Stores all user preferences, API endpoints, model settings, and feature flags. Supports hot-reloading and environment variable overrides.

SQLite Backend

Reliable storage in ~/.config/eldric/config.db

Environment Variables

Override any setting via ELDRIC_* vars

Hot Reload

Changes apply immediately without restart

Import/Export

Backup and restore configuration

Usage Examples

Terminal
# View current configuration
eldric> /config
Current configuration:
  ollama_host: http://localhost:11434
  model: llama3.1:8b
  auto_approve: false
  rag_enabled: true
  embedding_model: nomic-embed-text

# Set a configuration value
eldric> /config set model llama3.1:70b
Updated: model = llama3.1:70b

# Export configuration
eldric> /config export ~/eldric-backup.json
Configuration exported to ~/eldric-backup.json
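
The exact schema of config.db is not documented in this section, so the sketch below assumes a simple key/value settings table. It illustrates the lookup order the module follows: an ELDRIC_* environment variable overrides whatever is stored on disk.

Example (Python)
# config_sketch.py - illustrative key/value settings store with ELDRIC_* overrides
# (the real config.db schema is not shown here; a simple settings table is assumed)
import os
import sqlite3
from pathlib import Path

DB_PATH = Path.home() / ".config" / "eldric" / "config.db"
DB_PATH.parent.mkdir(parents=True, exist_ok=True)

def get_setting(key: str, default: str | None = None) -> str | None:
    # 1. Environment variable wins: e.g. "model" is overridden by ELDRIC_MODEL
    env_val = os.environ.get(f"ELDRIC_{key.upper()}")
    if env_val is not None:
        return env_val
    # 2. Otherwise fall back to the SQLite store
    with sqlite3.connect(DB_PATH) as db:
        db.execute("CREATE TABLE IF NOT EXISTS settings (key TEXT PRIMARY KEY, value TEXT)")
        row = db.execute("SELECT value FROM settings WHERE key = ?", (key,)).fetchone()
    return row[0] if row else default

def set_setting(key: str, value: str) -> None:
    with sqlite3.connect(DB_PATH) as db:
        db.execute("INSERT INTO settings (key, value) VALUES (?, ?) "
                   "ON CONFLICT(key) DO UPDATE SET value = excluded.value", (key, value))

if __name__ == "__main__":
    set_setting("model", "llama3.1:70b")
    print(get_setting("model"))          # an ELDRIC_MODEL env var would override this
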
Storage Locations
~/.config/eldric/config.db - Main configuration database
~/.config/eldric/mcp_servers.json - MCP server definitions
~/.config/eldric/history.txt - Command history

Model Manager Stable

Model Discovery & Selection

Discovers available models from Ollama, tracks model capabilities (vision, tool-calling, embeddings), and provides intelligent model selection based on task requirements.

Auto-Discovery

Automatically detects models from Ollama

Capability Detection

Identifies vision, embeddings, tool-calling

Model Templates

Correct prompt formatting per model family

Recommendations

Suggests models based on task type

Model Capabilities

Reference
Model            Chat   Vision   Tools   Embedding
llama3.1:8b      ✓      -        ✓       -
llama3.2:3b      ✓      -        ✓       -
llava:13b        ✓      ✓        -       -
qwen2.5-coder    ✓      -        ✓       -
deepseek-coder   ✓      -        ✓       -
nomic-embed      -      -        -       ✓
mxbai-embed      -      -        -       ✓
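
One way to approximate the auto-discovery and capability detection described above is to list models via Ollama's /api/tags endpoint and apply name-based heuristics. The family lists in this sketch are assumptions for illustration; Eldric's real detection logic may differ.

Example (Python)
# model_discovery.py - sketch: list Ollama models and guess capabilities by family name
# (the name-based heuristics below are assumptions, not Eldric's actual detection code)
import json
import urllib.request

OLLAMA_HOST = "http://localhost:11434"
VISION_FAMILIES = ("llava", "bakllava")
EMBED_FAMILIES = ("nomic-embed", "mxbai-embed", "all-minilm", "snowflake-arctic")
TOOL_FAMILIES = ("llama3.1", "llama3.2", "qwen2.5")

def list_models() -> list[str]:
    with urllib.request.urlopen(f"{OLLAMA_HOST}/api/tags") as resp:
        return [m["name"] for m in json.load(resp)["models"]]

def capabilities(name: str) -> dict[str, bool]:
    return {
        "vision": any(f in name for f in VISION_FAMILIES),
        "embedding": any(f in name for f in EMBED_FAMILIES),
        "tools": any(f in name for f in TOOL_FAMILIES),
    }

if __name__ == "__main__":
    for model in list_models():
        caps = [k for k, v in capabilities(model).items() if v]
        print(f"{model:25s} {', '.join(caps) or 'chat only'}")
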

Tool & Agent Systems

Execute actions and specialize behavior with constrained tool access


Tool System Stable

40+ Built-in Tools

The tool executor and parser handle AI tool calls. The parser extracts XML tool invocations from responses, and the executor safely runs them with user approval, sandboxing, and output capture.

File Operations

Read, Write, Edit, Glob, Grep

Shell Execution

Bash with timeout, sandboxing, streaming

Web & Network

WebSearch, WebFetch, Ping, DNS, Curl

Database

SQLite, PostgreSQL, MySQL connectors

Package Managers

npm, pip, cargo, brew, apt, dnf

Export Tools

Markdown, JSON, PDF, HTML export

Tool Execution Flow

Example
# AI requests tool execution
eldric> What files are in the src directory?

[AI] I'll list the files in src/

⚡ Tool: Glob
   Pattern: src/**/*
   Allow? [y/n/always]: y

Found 24 files:
  src/main.cpp
  src/cli.cpp
  src/config.cpp
  ...

# Auto-approve mode skips confirmation
$ eldric --auto-approve "Count lines in all Python files"
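
The exact XML format of tool invocations is not shown in this section, so the sketch below assumes a <tool name="..."> element with parameter children. It walks the same parse, approve, and execute flow, with a single Glob tool as the example.

Example (Python)
# tool_flow.py - sketch of the parse -> approve -> execute flow for XML tool calls
# (the tag and attribute names here are assumptions; Eldric's real format may differ)
import glob
import xml.etree.ElementTree as ET

RESPONSE = 'I will list the files. <tool name="Glob"><pattern>src/**/*</pattern></tool>'

def parse_tool_call(text: str):
    """Extract the first <tool> element and return (name, {param: value})."""
    start, end = text.find("<tool"), text.find("</tool>")
    if start == -1 or end == -1:
        return None
    node = ET.fromstring(text[start:end + len("</tool>")])
    return node.attrib["name"], {child.tag: child.text for child in node}

def execute(name: str, args: dict) -> str:
    if name == "Glob":                    # only one tool implemented in this sketch
        return "\n".join(glob.glob(args["pattern"], recursive=True))
    raise ValueError(f"unknown tool: {name}")

if __name__ == "__main__":
    call = parse_tool_call(RESPONSE)
    if call:
        name, args = call
        if input(f"Tool: {name} {args} - allow? [y/n] ").strip().lower() == "y":
            print(execute(name, args))
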
Tool Categories
File: Read, Write, Edit, Glob, Grep, NotebookEdit
Shell: Bash, BashBackground, BashInterrupt
Web: WebSearch, WebFetch, Curl
Database: DBConnect, DBQuery, DBSchema, DBTables
Network: Ping, DNSLookup, PortScan, Traceroute
Package: NpmInstall, PipInstall, CargoAdd, BrewInstall
RAG: Learn, Forget, Remember, SearchKnowledge
Export: ExportMarkdown, ExportJSON, ExportPDF

Agent System Stable

Specialized AI Personas

13 specialized agents with constrained tool access for safety, efficiency, and predictability. Each agent is optimized for specific tasks like coding, exploring, or database operations.

General Agent

All 40+ tools, full capability

Coder Agent

Read, Write, Edit, Glob - no execution

Explorer Agent

Read-only: Glob, Grep, Read

Database Agent

SQL tools only: Connect, Query, Schema

Usage Examples

Terminal
# List available agents
eldric> /agents
Available agents:
  general     - Full access (40+ tools)
  explorer    - Read-only navigation (Glob, Grep, Read)
  coder       - Code editing (Read, Write, Edit, Glob)
  runner      - Shell execution (Bash, Read)
  planner     - Architecture (Glob, Grep, Read) - no execution
  searcher    - Web research (WebSearch, WebFetch)
  database    - SQL operations (DBConnect, DBQuery, DBSchema)
  learner     - RAG system (Learn, Remember, Forget)
  network     - Diagnostics (Ping, DNS, Curl, PortScan)
  trainer     - LLM training (CreateDataset, StartTraining)
  merger      - Model merging (CreateRecipe, RunMerge)
  orchestrator - Cluster management (ClusterStatus, Deploy)

# Switch to coder agent
eldric> /agent coder
Switched to Coder agent (4 tools available)

# Agent can only use its allowed tools
eldric [coder]> Run the tests
[AI] I cannot execute shell commands in Coder mode.
Please switch to Runner or General agent.
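
Conceptually, constrained tool access is just a per-agent allow-list checked before dispatch. The sketch below mirrors the agent and tool names from the listing above; the data structure itself is illustrative, not Eldric's actual code.

Example (Python)
# agent_gate.py - sketch: per-agent tool allow-lists gate execution before dispatch
AGENT_TOOLS = {
    "general":  None,                                   # None = all tools allowed
    "explorer": {"Glob", "Grep", "Read"},
    "coder":    {"Read", "Write", "Edit", "Glob"},
    "runner":   {"Bash", "Read"},
    "database": {"DBConnect", "DBQuery", "DBSchema"},
}

def allowed(agent: str, tool: str) -> bool:
    tools = AGENT_TOOLS.get(agent)
    return tools is None or tool in tools

if __name__ == "__main__":
    print(allowed("coder", "Edit"))    # True
    print(allowed("coder", "Bash"))    # False -> "switch to Runner or General agent"
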

Data & Knowledge

Persistent storage, embeddings, and retrieval-augmented generation


RAG Engine Stable

Retrieval-Augmented Generation

Learn from your documents, code, and data. The RAG engine chunks content, generates embeddings, stores them in a vector database, and automatically injects relevant context into AI prompts.

Multi-Format Support

PDF, DOCX, MD, code, HTML, JSON, XML, CSV

Smart Chunking

Configurable chunk size with overlap

Auto Context

Automatically adds relevant knowledge to prompts

Source Tracking

Know where each piece of knowledge came from

Usage Examples

Terminal
# Learn from a file
eldric> /learn ~/Documents/company-handbook.pdf
Learning from company-handbook.pdf...
  Extracted 45 pages of text
  Created 128 chunks
  Generated embeddings
  Added to knowledge base ✓

# Learn from a directory
eldric> /learn ~/projects/my-api --pattern "*.py"
Learning from 23 Python files...
  Created 312 chunks ✓

# Learn from a URL
eldric> /learn https://docs.example.com/api
Fetching and learning from URL... ✓

# Now ask questions - context is auto-injected
eldric> What is our vacation policy?
[AI uses RAG context from company-handbook.pdf]
According to the company handbook, employees receive...

# View learned sources
eldric> /sources
Knowledge base sources:
  company-handbook.pdf    (128 chunks)
  my-api/*.py            (312 chunks)
  docs.example.com       (45 chunks)

# Forget a source
eldric> /forget company-handbook.pdf
Removed 128 chunks from knowledge base ✓
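
To make the pipeline concrete, the sketch below shows fixed-size chunking with overlap and how retrieved chunks might be injected ahead of a question. The chunk size, overlap, and prompt template are illustrative values, not Eldric's exact settings.

Example (Python)
# rag_chunking.py - sketch: fixed-size chunking with overlap plus prompt assembly
def chunk_text(text: str, size: int = 800, overlap: int = 100) -> list[str]:
    """Split text into overlapping character windows."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

def build_prompt(question: str, retrieved: list[tuple[str, str]]) -> str:
    """Inject retrieved (source, text) chunks ahead of the user's question."""
    context = "\n\n".join(f"[{src}]\n{txt}" for src, txt in retrieved)
    return f"Use the following context to answer.\n\n{context}\n\nQuestion: {question}"

if __name__ == "__main__":
    doc = "Vacation policy: employees accrue paid time off each month. " * 40  # stand-in text
    chunks = chunk_text(doc)
    print(f"{len(chunks)} chunks")
    print(build_prompt("What is our vacation policy?",
                       [("company-handbook.pdf", chunks[0])]))
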
Supported File Types
Documents: .pdf, .docx, .doc, .odt, .rtf, .epub
Text: .txt, .md, .html, .htm
Data: .json, .xml, .yaml, .yml, .csv
Code: .py, .js, .ts, .cpp, .c, .h, .java, .go, .rs, .rb, .swift
Spreadsheets: .xlsx, .xls, .ods

Vector Database Stable

Embeddings Storage

Stores and searches vector embeddings for semantic similarity. Supports multiple backends including SQLite (default), ChromaDB, and FAISS for high-performance workloads.

SQLite Backend

Zero-config, portable, ~50k vectors

ChromaDB Backend

HTTP-based, scalable, production-ready

FAISS Backend

GPU-accelerated, millions of vectors

Similarity Search

Cosine similarity with threshold filtering

Backend Comparison

Reference
Backend    Setup            Scale       Speed       Use Case
SQLite     Zero-config      ~50k docs   Fast        Personal
ChromaDB   HTTP server      ~1M docs    Fast        Team/Server
FAISS      Compile option   10M+ docs   Very Fast   Enterprise
# Configure backend
eldric> /config set vector_backend chroma
eldric> /config set vector_url http://localhost:8000/eldric
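
Regardless of backend, retrieval boils down to cosine similarity with a threshold. The sketch below shows a brute-force version over an in-memory store; the real backends add persistence and indexing on top of the same idea.

Example (Python)
# cosine_search.py - sketch: brute-force cosine similarity with a threshold filter
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def search(query: list[float], store: dict[str, list[float]],
           top_k: int = 3, threshold: float = 0.6) -> list[tuple[str, float]]:
    scored = [(doc_id, cosine(query, vec)) for doc_id, vec in store.items()]
    scored = [s for s in scored if s[1] >= threshold]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:top_k]

if __name__ == "__main__":
    store = {"chunk-1": [0.1, 0.9, 0.0], "chunk-2": [0.8, 0.1, 0.1]}
    print(search([0.2, 0.8, 0.0], store))
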

Embeddings Stable

Vector Generation

Generates vector embeddings from text using Ollama embedding models. These vectors enable semantic search, similarity matching, and RAG context retrieval.

Ollama Integration

Uses nomic-embed-text, mxbai-embed, etc.

Batch Processing

Efficient bulk embedding generation

Domain Embeddings

Specialized embeddings for code, legal, medical

Caching

Avoids re-computing identical embeddings

Embedding Models

Reference
Model               Dimensions   Size     Best For
nomic-embed-text    768          274 MB   General text (default)
mxbai-embed-large   1024         670 MB   Higher accuracy
all-minilm          384          45 MB    Lightweight/fast
snowflake-arctic    1024         670 MB   Code & technical docs
# Configure embedding model
eldric> /config set embedding_model nomic-embed-text
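
The sketch below calls Ollama's /api/embeddings endpoint and caches results by a hash of the model and text, so identical inputs are never re-computed. The cache layout is illustrative; only the endpoint and its request/response shape come from Ollama.

Example (Python)
# embed_cache.py - sketch: Ollama embeddings with a hash-keyed cache to skip repeats
import hashlib
import json
import urllib.request

OLLAMA_HOST = "http://localhost:11434"
_cache: dict[str, list[float]] = {}

def embed(text: str, model: str = "nomic-embed-text") -> list[float]:
    key = hashlib.sha256(f"{model}:{text}".encode()).hexdigest()
    if key in _cache:                        # identical text -> no recompute
        return _cache[key]
    payload = json.dumps({"model": model, "prompt": text}).encode()
    req = urllib.request.Request(f"{OLLAMA_HOST}/api/embeddings", data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        vector = json.load(resp)["embedding"]
    _cache[key] = vector
    return vector

if __name__ == "__main__":
    v = embed("What is our vacation policy?")
    print(len(v), "dimensions")              # 768 for nomic-embed-text
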

Session Manager Stable

Conversation Persistence

Persists conversations to SQLite with full history including messages, tool executions, and file modifications. Supports session resume, export, and auto-documentation generation.

Full History

Messages, tools, file changes tracked

Session Resume

Continue conversations across restarts

Export Options

Markdown, JSON, HTML formats

Auto-Documentation

Generate TODO.md, DECISIONS.md from sessions

Usage Examples

Terminal
# List previous sessions
eldric> /sessions
Recent sessions:
  abc123  2024-01-15 14:30  "Refactoring auth module"  (42 messages)
  def456  2024-01-14 09:15  "Bug fix in payment flow"  (23 messages)
  ghi789  2024-01-13 16:45  "New feature: webhooks"    (67 messages)

# Resume a previous session
eldric> /resume abc123
Resumed session: Refactoring auth module
Context restored (42 messages)

# Export current session
eldric> /export session-report.md
Session exported to session-report.md

# Generate TODO from session
eldric> /generate-todo
Generated TODO.md with 8 action items
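
The session database schema is not spelled out here, so the sketch below assumes a minimal sessions/messages layout and shows how /resume can rebuild the conversation context from it.

Example (Python)
# sessions_sketch.py - illustrative session persistence (the real schema is assumed)
import sqlite3

def init(db: sqlite3.Connection) -> None:
    db.executescript("""
        CREATE TABLE IF NOT EXISTS sessions (
            id TEXT PRIMARY KEY, title TEXT, started_at TEXT);
        CREATE TABLE IF NOT EXISTS messages (
            session_id TEXT, role TEXT, content TEXT,
            FOREIGN KEY(session_id) REFERENCES sessions(id));
    """)

def resume(db: sqlite3.Connection, session_id: str) -> list[tuple[str, str]]:
    """Return (role, content) pairs so the conversation context can be rebuilt."""
    return db.execute("SELECT role, content FROM messages WHERE session_id = ?",
                      (session_id,)).fetchall()

if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    init(db)
    db.execute("INSERT INTO sessions VALUES ('abc123', 'Refactoring auth module', '2024-01-15')")
    db.execute("INSERT INTO messages VALUES ('abc123', 'user', 'Rename AuthService')")
    print(resume(db, "abc123"))
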

Prompt Database Stable

Reusable Prompt Library

Store, organize, and reuse prompts with categories, tags, and favorites. Track usage statistics and quickly access your most effective prompts.

Categories

Organize prompts by type (coding, writing, etc.)

Tags & Search

Find prompts quickly with tags and full-text search

Usage Tracking

See most-used and recently-used prompts

Import/Export

Share prompts as JSON or Markdown

Usage Examples

Terminal
# Save a prompt
eldric> /prompt save code-review
Enter prompt content (end with empty line):
Review this code for:
- Security vulnerabilities
- Performance issues
- Code style violations
Suggest improvements with examples.

Saved prompt: code-review (category: coding)

# Use a saved prompt
eldric> /prompt use code-review
[Prompt loaded, paste your code]

# List prompts
eldric> /prompts
Saved prompts:
  ★ code-review      (coding)    used 15 times
  ★ explain-error    (debugging) used 12 times
    write-tests      (testing)   used 8 times
    summarize-doc    (writing)   used 5 times

# Search prompts
eldric> /prompts search security
Found 2 prompts:
  code-review     - "...Security vulnerabilities..."
  security-audit  - "Perform security audit..."
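
Full-text prompt search maps naturally onto SQLite's FTS5 extension. The table layout below is an assumption for illustration; FTS5 is available in most Python builds of SQLite.

Example (Python)
# prompt_search.py - sketch: full-text prompt search with SQLite FTS5
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE prompts USING fts5(name, category, content)")
db.executemany("INSERT INTO prompts VALUES (?, ?, ?)", [
    ("code-review", "coding", "Review this code for security vulnerabilities ..."),
    ("security-audit", "security", "Perform security audit of the given service ..."),
    ("write-tests", "testing", "Write unit tests covering edge cases ..."),
])

# MATCH runs a ranked full-text query across all indexed columns
for name, category in db.execute(
        "SELECT name, category FROM prompts WHERE prompts MATCH ? ORDER BY rank",
        ("security",)):
    print(name, category)
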

AI Training & Customization

Train, merge, and personalize models with your own data


Training Service New

Fine-Tune Your Own Models

Train custom AI models with your data using LoRA, QLoRA, or full fine-tuning. Create datasets from your prompts, sessions, or RAG knowledge, then train locally or remotely.

LoRA Training

Lightweight adapters, minimal GPU required

QLoRA Training

4-bit quantized, runs on consumer GPUs

Multiple Backends

llama.cpp, Axolotl, Unsloth support

Dataset Creation

From prompts, sessions, or RAG sources

Training Workflow

Terminal
# Switch to trainer agent
eldric> /agent trainer

# Create dataset from your saved prompts
eldric [trainer]> Create a dataset from my prompts
Creating dataset from 45 prompts...
  Format: alpaca
  Output: ~/.config/eldric/datasets/my-prompts.jsonl
  Records: 45 ✓

# Start training job
eldric [trainer]> Train a LoRA adapter on llama3.1:8b using my-prompts dataset
Starting LoRA training job...
  Base model: llama3.1:8b
  Dataset: my-prompts (45 records)
  Backend: unsloth
  LoRA rank: 16
  
  Epoch 1/3: loss=2.4521 ████████░░ 80%

# List training jobs
eldric [trainer]> /training jobs
Training jobs:
  job-001  Running   llama3.1:8b-lora  Epoch 2/3  loss=1.823
  job-002  Completed codellama-custom  3 epochs   loss=0.412
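
Dataset creation is mostly a format conversion. The sketch below writes stand-in prompt records as alpaca-format JSONL (instruction / input / output fields), matching the format and output path shown in the transcript above; the records themselves are placeholders.

Example (Python)
# make_dataset.py - sketch: write saved prompts as an alpaca-format JSONL dataset
import json
from pathlib import Path

saved_prompts = [  # stand-ins for records pulled from the prompt database
    {"name": "code-review", "content": "Review this code for security issues.",
     "example_answer": "1. Unvalidated input in the request handler ..."},
]

out = Path.home() / ".config" / "eldric" / "datasets" / "my-prompts.jsonl"
out.parent.mkdir(parents=True, exist_ok=True)

with out.open("w") as f:
    for p in saved_prompts:
        record = {
            "instruction": p["content"],
            "input": "",
            "output": p["example_answer"],
        }
        f.write(json.dumps(record) + "\n")

print(f"Wrote {len(saved_prompts)} records to {out}")
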
Training Types
LoRA - Low-Rank Adaptation, ~100MB adapters, fast training
QLoRA - Quantized LoRA, fits on 8GB VRAM
Full - Full fine-tuning, best quality, needs 24GB+ VRAM
Reasoning - Chain-of-thought training for better reasoning

Merge Service New

Combine Model Capabilities

Merge multiple models to combine their strengths. A coding model + creative writing model can produce a model that excels at both. Supports SLERP, TIES, DARE, and linear merging.

SLERP Merge

Smooth interpolation between models

TIES Merge

Task-specific, preserves unique strengths

DARE Merge

Drop And REscale for cleaner merges

Task Arithmetic

Add/subtract task vectors

Merge Workflow

Terminal
# Switch to merger agent
eldric> /agent merger

# Create a merge recipe
eldric [merger]> Merge codellama and llama3.1 using SLERP with 0.6 weight on codellama
Creating merge recipe...
  Method: SLERP (t=0.6)
  Models:
    - codellama:7b (weight: 0.6)
    - llama3.1:8b (weight: 0.4)
  Output: code-general-merged

# Run the merge
eldric [merger]> Run the merge
Running SLERP merge...
  Loading codellama:7b ████████████ 100%
  Loading llama3.1:8b  ████████████ 100%
  Merging layers       ████████░░░░ 67%

# List merged models
eldric [merger]> /merge recipes
Merge recipes:
  code-general-merged  SLERP     Completed  4.2GB
  creative-coder       TIES      Running    
  expert-blend         DARE      Pending
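
SLERP interpolates along the arc between two weight tensors rather than along a straight line, which tends to preserve each model's scale better than a plain average. The numpy sketch below shows the per-tensor formula only; a real merge iterates over every layer of both models.

Example (Python)
# slerp_sketch.py - spherical linear interpolation between two weight tensors
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Interpolate along the arc between two tensors; t=0 -> v0, t=1 -> v1."""
    a = v0.flatten() / (np.linalg.norm(v0) + eps)
    b = v1.flatten() / (np.linalg.norm(v1) + eps)
    dot = float(np.clip(np.dot(a, b), -1.0, 1.0))
    if abs(dot) > 0.9995:                      # nearly parallel: fall back to linear blend
        return (1 - t) * v0 + t * v1
    omega = np.arccos(dot)
    s0 = np.sin((1 - t) * omega) / np.sin(omega)
    s1 = np.sin(t * omega) / np.sin(omega)
    return s0 * v0 + s1 * v1                   # coefficients applied to the original tensors

if __name__ == "__main__":
    layer_a = np.random.randn(4, 4)            # stand-ins for one layer of each model
    layer_b = np.random.randn(4, 4)
    merged = slerp(0.6, layer_a, layer_b)      # t=0.6 leans the result toward layer_b
    print(merged.shape)
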

Personalization Beta

AI That Knows You

Three-level personalization: (1) User profiles adjust tone and style, (2) LoRA adapters learn from your feedback, (3) Full fine-tuning creates a model trained on your interactions.

Level 1: Profiles

Communication style, expertise, preferences

Level 2: Adapters

LoRA trained on your feedback

Level 3: Full Model

Custom model from your interactions

Feedback Tracking

👍/👎 and corrections improve responses

Personalization Levels

Example
# Set up user profile (Level 1 - no training)
eldric> /profile create
Creating user profile...
  Name: John
  Role: Senior Developer
  Style: Concise (vs. detailed)
  Expertise: Expert
  Tone: Professional
  
Profile saved. AI will adapt responses to your preferences.

# Give feedback to improve AI (for Level 2)
eldric> 👎 Too verbose, I prefer code-first answers
Feedback recorded. Use /train-adapter when you have 50+ interactions.

# Train personalization adapter (Level 2)
eldric> /train-adapter
Training personal adapter from 127 interactions...
  Using 89 positive interactions
  Training LoRA (rank=8)...
  Done! Adapter activated.

# Check personalization stats
eldric> /personalization stats
Personalization:
  Profile: Expert Developer (concise, professional)
  Interactions: 127 total
  Positive feedback: 89 (70%)
  Adapter: Active (trained on 89 samples)
  Top tools used: Bash, Edit, Grep
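
Level 1 needs no training at all: the profile can simply be folded into the system prompt. The sketch below shows one way to do that; the field names mirror the example above, and the template wording is an assumption.

Example (Python)
# profile_prompt.py - sketch: fold a Level 1 profile into the system prompt
profile = {
    "name": "John",
    "role": "Senior Developer",
    "style": "concise",          # vs. "detailed"
    "expertise": "expert",
    "tone": "professional",
}

def system_prompt(p: dict) -> str:
    return (
        f"You are assisting {p['name']}, a {p['role']}. "
        f"Assume {p['expertise']}-level knowledge, keep answers {p['style']}, "
        f"and use a {p['tone']} tone. Prefer code-first answers when relevant."
    )

if __name__ == "__main__":
    print(system_prompt(profile))
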

Multimodal Stable

Beyond Text

Process images, analyze screenshots, and work with visual content. Uses vision models like LLaVA to understand and describe images in your conversations.

Image Analysis

Describe, analyze, and extract info from images

Screenshot Support

Analyze UI screenshots and error dialogs

Vision Models

LLaVA, BakLLaVA, and other multimodal models

Context Preservation

Images stored in session history

Usage Examples

Terminal
# Use a vision model
eldric> /model llava:13b

# Analyze an image
eldric> /image ~/screenshots/error.png What error is shown?
[Analyzing image...]
The screenshot shows a Python traceback with a KeyError
exception. The error occurs in file "api.py" line 45
when accessing dictionary key "user_id" which doesn't exist...

# Describe a UI mockup
eldric> /image ~/designs/mockup.png Describe this UI and suggest improvements
[Analyzing image...]
This is a login form with:
- Email and password fields
- "Remember me" checkbox
- Blue "Sign In" button

Suggestions:
1. Add password visibility toggle
2. Include "Forgot password" link
3. Consider adding social login options...
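
Vision requests reach Ollama as ordinary chat messages carrying base64-encoded images. The sketch below shows that request shape directly; the file path and question are examples, and this is not the client's actual /image implementation.

Example (Python)
# vision_sketch.py - sketch: send an image to a vision model via Ollama's chat API
import base64
import json
import urllib.request

OLLAMA_HOST = "http://localhost:11434"

def ask_about_image(path: str, question: str, model: str = "llava:13b") -> str:
    image_b64 = base64.b64encode(open(path, "rb").read()).decode()
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": question, "images": [image_b64]}],
        "stream": False,
    }).encode()
    req = urllib.request.Request(f"{OLLAMA_HOST}/api/chat", data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]

if __name__ == "__main__":
    print(ask_about_image("error.png", "What error is shown?"))
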

Integrations

Connect to external tools and services


MCP Client Stable

Model Context Protocol

Connect to external MCP servers that provide additional tools, resources, and prompts. Supports stdio-based servers for local tools and HTTP servers for remote services.

External Tools

Add tools from MCP servers to your toolkit

Resources

Access external data sources and files

Multi-Server

Connect to multiple MCP servers at once

JSON-RPC

Standard protocol for reliable communication

MCP Configuration

Terminal
# List configured MCP servers
eldric> /mcp servers
MCP servers:
  filesystem  ● Connected  (5 tools)
  github      ○ Disconnected
  database    ● Connected  (3 tools)

# Connect to a server
eldric> /mcp connect github
Connecting to github MCP server...
  Discovered 8 tools:
    - github_search_repos
    - github_create_issue
    - github_list_prs
    ...

# List all available MCP tools
eldric> /mcp tools
Available MCP tools:
  [filesystem] read_file, write_file, list_directory, ...
  [github] search_repos, create_issue, list_prs, ...
  [database] query, schema, tables

# MCP server configuration file
$ cat ~/.config/eldric/mcp_servers.json
{
  "servers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/home/user"],
      "enabled": true
    },
    "github": {
      "command": "npx", 
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {"GITHUB_TOKEN": "ghp_xxx"},
      "enabled": true
    }
  }
}
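
On the wire, a stdio MCP server speaks newline-delimited JSON-RPC. The sketch below spawns the filesystem server from the config above, performs the initialize handshake, and lists its tools; the method names follow the MCP spec, but treat the exact parameters as illustrative.

Example (Python)
# mcp_stdio_sketch.py - sketch: talk JSON-RPC to a stdio MCP server and list its tools
import json
import subprocess

proc = subprocess.Popen(
    ["npx", "-y", "@modelcontextprotocol/server-filesystem", "/home/user"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

def send(msg: dict) -> None:
    proc.stdin.write(json.dumps(msg) + "\n")     # newline-delimited JSON-RPC
    proc.stdin.flush()

def recv() -> dict:
    return json.loads(proc.stdout.readline())

send({"jsonrpc": "2.0", "id": 1, "method": "initialize",
      "params": {"protocolVersion": "2024-11-05", "capabilities": {},
                 "clientInfo": {"name": "eldric", "version": "0.1"}}})
recv()                                            # server's capabilities
send({"jsonrpc": "2.0", "method": "notifications/initialized"})

send({"jsonrpc": "2.0", "id": 2, "method": "tools/list"})
for tool in recv()["result"]["tools"]:
    print(tool["name"])
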
Popular MCP Servers
@modelcontextprotocol/server-filesystem - File operations
@modelcontextprotocol/server-github - GitHub integration
@modelcontextprotocol/server-postgres - PostgreSQL queries
@modelcontextprotocol/server-slack - Slack messaging
@modelcontextprotocol/server-puppeteer - Browser automation

Looking for GUI Workbenches?

The Eldric GUI Client offers visual workbenches for training, alignment, reasoning analysis, and more.

Explore GUI Client

Module Architecture

Eldric Client: CLI Interface, Config Manager, Session Manager, Prompt Database
Core Engine: Ollama Client, Tool System, Agent System, Streaming
Data & Knowledge Layer: RAG Engine, Vector Database, Embeddings Client, MCP Client, DB Client
AI Training & Customization: Training Service, Merge Service, Personalization, Multimodal Support
GGUF / Ollama LLM (llama3.2, codellama, qwen2.5, ...)