
Lutum Veritas is an open-source deep research engine presented as a radically superior alternative to the AI offerings of large corporations such as Perplexity, OpenAI, and Google. Developed by Martin Gehrken under the philosophy that "the search for a truth can never be worth more than the search to question it," it aims to deliver verifiable, exhaustive knowledge.


LV Research Logo

Lutum Veritas

Open Source Deep Research Engine
"The search for a truth can never be worth more than the search to question it."

Features · Installation · Quick Start · How It Works · Tech Stack · License

License: AGPL-3.0 · Version · Platform · Python


Perplexity, OpenAI and Google deliver summaries. I wanted truth.

So I stopped waiting and built it myself. The Camoufox scraper cuts through Cloudflare, Bloomberg and paywalls with 0% detection. The recursive pipeline passes context forward – each research point knows what the previous ones discovered. Claim Audits force the model into self-reflection instead of blind assertions.

The result: 203,000 characters of academic depth for a single query. Cost: under 20 cents. That's orders of magnitude cheaper than OpenAI o3 and qualitatively in a different league.

This isn't an "alternative" to existing tools. This is proof that a solo dev with the right architecture can beat billion-dollar corporations at what should be their core competency: deep, verifiable knowledge.

The bar for Deep Research is set right here.

— Martin Gehrken, January 30, 2026


Demo

Lutum Veritas Demo - Deep Research in Action

Full Deep Research workflow: Query → Clarification → Plan → Research → Final Report


🔬 Benchmark Results

Independent comparison of Lutum Veritas vs. ChatGPT Deep Research vs. Perplexity Pro vs. Gemini Advanced:

📊 View Full Benchmark Report (German) · 🌐 Auto-Translated Version (EN, via Google Translate)

  • 4-way comparison with identical scientific query
  • Quantitative metrics: output length, sources, costs, duration
  • Quality review by competing AIs
  • 16-agent fact-check protocol for hallucinations

TL;DR: Lutum delivered 103k characters with 90 sources for $0.19. ChatGPT: 12k chars, 25 sources, fabricated citations. Gemini: 24k chars, data minimization detected. Perplexity: 21k chars, $20/mo subscription.


🏆 Thank You, Community!

Veritas Research - Research Without Permission

The First 4 Days

Released: January 30, 2026 at 09:00
Current: February 3, 2026 at 10:00

What you've accomplished:


📊 Traffic Stats (First 4 Days):

  • ⭐ 46 Stars (+24 in 24h, +109%)
  • 🍴 6 Forks (+3 in 24h, +100%)
  • 🔥 390 Clones (+101 in 24h, +35%)
  • 👀 1,016 Views (+673 in 24h, +196%)
  • 👥 630 Unique Visitors (+490 in 24h, +350%)
  • 🎯 7.3% Conversion Rate (Stars/Visitors - Industry avg: 1-3%)
  • 🌍 Featured on: Hacker News, ComputerBase.de, Hardwareluxx, Product Hunt, DeepLearning.AI Community

You Made This Possible

In just 4 days, you've helped prove something important:

A solo developer with an idea can stand toe-to-toe with billion-dollar companies. You don't need permission to build something great. You need passion, code, and a community that believes.

Every star, every clone, every "this is exactly what I needed" message keeps this project going.

You're not just users. You're proof that Research Without Permission isn't just a tagline - it's a movement.

Thank you for standing against the giants. 🚀


Want to join the fight? ⭐ Star the repo · 🐛 Report issues · 💬 Share with your network · 🔨 Contribute on GitHub


What is Lutum Veritas?

Lutum Veritas is a self-hosted Deep Research Engine that transforms any question into a comprehensive research document. Unlike Perplexity, ChatGPT, or Google's AI Overview, you bring your own API key and everything runs locally on your machine - Windows, macOS, or Linux.

Why Use This?

| Problem | Lutum Veritas Solution |
| --- | --- |
| Expensive subscriptions | Pay only for API tokens (~$0.08 per research) |
| Surface-level answers | Deep multi-source analysis with 20+ sources per topic |
| Black-box results | See every source, every step, full transparency |
| Bot detection blocks | Camoufox scraper with 0% detection rate |
| No local control | Runs 100% on your machine |
| Platform locked | Works on Windows, macOS, and Linux |

Features

🔬 Deep Research Pipeline

```
Your Question
     ↓
┌─────────────────────────────────────────────────────┐
│  1. CLARIFICATION                                   │
│     AI asks smart follow-up questions               │
├─────────────────────────────────────────────────────┤
│  2. RESEARCH PLAN                                   │
│     Creates structured investigation points         │
├─────────────────────────────────────────────────────┤
│  3. DEEP RESEARCH (per point)                       │
│     Think → Search → Pick URLs → Scrape → Dossier   │
├─────────────────────────────────────────────────────┤
│  4. FINAL SYNTHESIS                                 │
│     Cross-reference all findings into one document  │
└─────────────────────────────────────────────────────┘
     ↓
📄 Comprehensive Report (5,000-10,000+ words)
```
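The context-forwarding idea behind step 3 (each research point sees what earlier points discovered) can be sketched in a few lines of Python. All names here are illustrative stand-ins, not the project's actual API — the real pipeline lives in lutum-backend/routes/research.py:

```python
# Sketch of a context-forwarding research loop (illustrative only).
def run_pipeline(points, research_point):
    """Run each research point with the accumulated learnings of all
    previous points, then return the per-point dossiers."""
    learnings = []   # grows as the pipeline advances
    dossiers = []
    for point in points:
        # Each point receives everything discovered so far.
        dossier = research_point(point, context=list(learnings))
        dossiers.append(dossier)
        learnings.append(dossier["key_learnings"])
    return dossiers

# Stub standing in for the real Think → Search → Pick URLs → Scrape → Dossier step:
def fake_research(point, context):
    return {"point": point,
            "key_learnings": f"learned({point})",
            "context_size": len(context)}

dossiers = run_pipeline(["A", "B", "C"], fake_research)
```

The key property: point "C" researches with two prior learnings in context, while point "A" starts from nothing.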

🎓 Academic Mode

Hierarchical research with autonomous areas:

  • Parallel Processing: Research areas independently
  • Meta-Synthesis: Find cross-connections between areas
  • Toulmin Argumentation: Structured academic reasoning
  • Evidence Grading: Rate source quality (Level I-VII)
  • Claim Audit Tables: Confidence ratings for every claim
  • 200,000+ character outputs: Full academic depth, no shortcuts
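One way to picture the claim-audit rows and evidence grading described above, assuming a hypothetical schema — the real field names and grading rules are defined by the project's prompts, not shown here:

```python
from dataclasses import dataclass

# Hypothetical shape of one claim-audit row (illustrative, not the project's schema).
@dataclass
class ClaimAudit:
    claim: str
    evidence_level: int   # 1 (strongest) .. 7 (weakest), per the Level I-VII grading
    confidence: float     # 0.0 .. 1.0

    def verdict(self) -> str:
        # Toy rule: strong evidence plus high confidence => "supported".
        if self.evidence_level <= 3 and self.confidence >= 0.7:
            return "supported"
        return "needs review"

strong = ClaimAudit("X reduces Y by 40%", evidence_level=2, confidence=0.9)
weak = ClaimAudit("Z cures W", evidence_level=6, confidence=0.4)
```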

🎯 Ask Mode - NEW in v1.3.0

Quick answers. Verified facts. No hallucinations.

Ask Mode Demo - 6-Stage Pipeline with Verification

Ask Mode workflow: Question → C1-C6 stages → Verified Answer with Citations

The new Deep Question mode bridges the gap between chat and Deep Research. It's the tool you keep open when your question isn't "big enough" for a 20-minute deep dive, but you need more than an unverified chat response based on outdated, biased training data.

The difference:

  • Regular Chat: No verification. No live search. Answers from stale training data.
  • Ask Mode: Every answer is researched, sourced, and self-verified against a second round of sources.

When you need a real answer on the first try: this is your mode.

Features

  • 6-Stage Pipeline: Intent → Knowledge → Search → Scrape → Answer → Verify → Fact-Check (~70-90s)
  • Dual-Scraping Phases: First scrape for answer, second scrape for verification
  • Citation System: Inline citations [1], [2] for sources + [V1], [V2] for verification
  • Claim Auditing: Every claim is fact-checked against additional sources
  • Auto-Language Detection: Responds in same language as your question
  • Separate Sessions: Ask sessions stored separately from Deep Research

Cost

Cost? A joke. ~400 queries for $1.

| Stage | Cost per Query |
| --- | --- |
| C1: Intent Analysis | $0.000839 |
| C2: Knowledge Requirements | $0.000245 |
| C3: Search Strategy | $0.000847 |
| C4: Answer Synthesis | $0.000158 |
| C5: Claim Audit | $0.000279 |
| Total per Query | ~$0.0024 |
  • 0.24 cents per answer
  • 416 verified answers for $1
  • Model: google/gemini-2.5-flash-lite-preview-09-2025
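The per-stage figures above add up as claimed; a quick arithmetic check:

```python
# Per-query stage costs from the table above (USD).
stage_costs = {
    "C1 Intent Analysis":        0.000839,
    "C2 Knowledge Requirements": 0.000245,
    "C3 Search Strategy":        0.000847,
    "C4 Answer Synthesis":       0.000158,
    "C5 Claim Audit":            0.000279,
    "C6 Verification":           0.000049,
}

total = sum(stage_costs.values())       # 0.002417 USD
rounded = round(total, 4)               # the "~$0.0024" figure
queries_per_dollar = int(1 / rounded)   # the "416 verified answers for $1" figure
```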

💻 Desktop App Features

| Feature | Description |
| --- | --- |
| One-Click Install | Single installer, no separate backend needed |
| Live Progress | Watch research happen in real-time |
| Session Management | Save, rename, delete research sessions |
| Source Boxes | Expandable boxes showing all scraped URLs |
| Citation Links | Clickable [1] references to sources |
| Export | Download as Markdown or PDF |
| Dark Mode | System theme support |
| i18n | German & English interface |

🛡️ Zero Detection Scraping

Powered by Camoufox - a hardened Firefox fork that bypasses:

  • Cloudflare
  • DataDome
  • PerimeterX
  • Bloomberg, TCGPlayer, and most anti-bot systems

Installation

Option A: Download Windows Installer (Easiest for Windows)

Platform: Windows only
Requirements: Python 3.11+ installed (python.org)

  1. Download Lutum Veritas_1.2.4_x64-setup.exe from Releases
  2. Run the installer
    • If Python is not found, the installer will prompt you to install it
    • Dependencies are installed automatically via pip
  3. Launch Lutum Veritas from your Start Menu
  4. Select your API Provider in Settings (OpenRouter, OpenAI, Anthropic, Google Gemini, or HuggingFace)
  5. Enter your API Key
  6. Start researching!

Note: The backend starts automatically when you open the app. No separate process to manage.

Option B: Install via uv/uvx (Cross-Platform)

Platform: Windows, macOS, Linux
Requirements: uv package manager

This is the recommended method for macOS and Linux users, and also works great on Windows if you prefer command-line tools.

Install and Run:

```bash
# Option 1: Install as a persistent tool
uv tool install git+https://github.com/IamLumae/lutum-veritas.git

# Then run anytime with:
lutum-veritas

# Option 2: Run directly without installation (ephemeral)
uvx --from git+https://github.com/IamLumae/lutum-veritas.git lutum-veritas
```

Both commands will:

  • Install all dependencies automatically
  • Start the backend server on http://localhost:8420
  • Open your browser to the web interface

Tip: Use uv tool install if you plan to use Lutum regularly. Use uvx for one-time runs or testing.

Option C: Build from Source (Developers)

Platform: All platforms
Requirements:

  • Python 3.11+
  • Node.js 18+
  • Rust (for Tauri)
```bash
# Clone
git clone https://github.com/IamLumae/lutum-veritas.git
cd lutum-veritas

# Backend
cd lutum-backend
pip install -r requirements.txt
python main.py

# Frontend (new terminal)
cd lutum-desktop
npm install
npm run tauri dev
```

Quick Start

  1. Launch App - Open Lutum Veritas (backend starts automatically)
  2. Select Provider - Settings → Choose OpenRouter, OpenAI, Anthropic, Gemini, or HuggingFace
  3. Enter API Key - Enter your API key for the selected provider
  4. Ask Anything - Type your research question
  5. Answer Clarifications - Help the AI understand your needs
  6. Review Plan - Approve or modify the research plan
  7. Click "Let's Go" - Watch the magic happen
  8. Export - Download your research as MD or PDF

How It Works

Architecture

```
┌─────────────────────────────────────────────────────────────┐
│                    LUTUM VERITAS DESKTOP                     │
│  ┌───────────────────────────────────────────────────────┐  │
│  │              Tauri Shell (Rust + WebView)              │  │
│  │         Auto-starts Python backend on launch           │  │
│  │  ┌─────────────────────────────────────────────────┐  │  │
│  │  │           React Frontend (TypeScript)           │  │  │
│  │  │  • Chat Interface     • Session Management      │  │  │
│  │  │  • Live Status        • Markdown Rendering      │  │  │
│  │  └─────────────────────────────────────────────────┘  │  │
│  └───────────────────────────────────────────────────────┘  │
│                          ↕ HTTP                              │
│  ┌───────────────────────────────────────────────────────┐  │
│  │              FastAPI Backend (Python)                  │  │
│  │  • Research Orchestrator    • LLM Integration         │  │
│  │  • Session Persistence      • SSE Streaming           │  │
│  │  ┌─────────────────────────────────────────────────┐  │  │
│  │  │         Camoufox Scraper (Firefox Fork)         │  │  │
│  │  │              0% Bot Detection Rate               │  │  │
│  │  └─────────────────────────────────────────────────┘  │  │
│  └───────────────────────────────────────────────────────┘  │
└─────────────────────────────────────────────────────────────┘
```

LLM Pipeline

| Step | Model | Purpose |
| --- | --- | --- |
| Think | Gemini Flash Lite | Generate search strategies |
| Pick URLs | Gemini Flash Lite | Select best sources |
| Dossier | Gemini Flash Lite | Analyze and summarize |
| Final Synthesis | Qwen 235B | Create comprehensive report |

Supported Providers: OpenRouter (200+ models), OpenAI, Anthropic, Google Gemini, HuggingFace - bring your own API key.


Tech Stack

| Component | Technology |
| --- | --- |
| Desktop Shell | Tauri 2.0 (Rust) |
| Frontend | React 19 + TypeScript + Tailwind CSS |
| Backend | FastAPI (Python 3.11) |
| Scraper | Camoufox (Hardened Firefox) |
| LLMs | Multi-Provider (OpenRouter, OpenAI, Anthropic, Gemini, HuggingFace) |
| Database | File-based JSON (sessions) |
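The "File-based JSON (sessions)" approach amounts to one JSON file per research session. A minimal sketch of the idea — paths, class names, and fields are assumptions, not the backend's real layout:

```python
import json
import tempfile
from pathlib import Path

# Toy session store mirroring the file-based JSON approach (illustrative only).
class SessionStore:
    def __init__(self, root):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def save(self, session_id, data):
        # One JSON file per session, named by session id.
        path = self.root / f"{session_id}.json"
        path.write_text(json.dumps(data, indent=2), encoding="utf-8")

    def load(self, session_id):
        path = self.root / f"{session_id}.json"
        return json.loads(path.read_text(encoding="utf-8"))

    def list_sessions(self):
        return sorted(p.stem for p in self.root.glob("*.json"))

with tempfile.TemporaryDirectory() as tmp:
    store = SessionStore(tmp)
    store.save("demo", {"title": "GaN chargers"})
    loaded = store.load("demo")
    names = store.list_sessions()
```

The upside of this design is zero database setup and sessions you can inspect, back up, or delete with a file manager.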

Project Structure

```
lutum-veritas/
├── lutum/                      # Core Python library
│   ├── researcher/
│   │   └── prompts/            # LLM prompts (Think, Pick, Dossier, Synthesis)
│   └── scrapers/
│       └── camoufox_scraper.py # Zero-detection web scraper
├── lutum-backend/              # FastAPI server
│   └── routes/
│       └── research.py         # Research pipeline orchestrator
├── lutum-desktop/              # Tauri desktop app
│   ├── src/
│   │   ├── components/         # React components
│   │   ├── hooks/              # useBackend API hook
│   │   └── stores/             # Session state management
│   └── src-tauri/
│       ├── src/lib.rs          # Auto-start backend logic
│       └── nsis-hooks.nsh      # Installer: Python check + pip install
├── LICENSE                     # AGPL-3.0
├── NOTICE                      # Copyright & commercial licensing
└── README.md                   # You are here
```

API Reference

Endpoints

| Endpoint | Method | Description |
| --- | --- | --- |
| /health | GET | Backend health check |
| /research/overview | POST | Initial analysis & clarification questions |
| /research/plan | POST | Generate research plan |
| /research/plan/revise | POST | Modify plan based on feedback |
| /research/deep | POST | Execute deep research (SSE stream) |
| /research/academic | POST | Execute academic research (SSE stream) |

SSE Events (Deep Research)

```jsonc
// Status updates
{"type": "status", "message": "Searching Google..."}

// Sources found
{"type": "sources", "urls": ["https://...", "https://..."]}

// Point completed
{"type": "point_complete", "point_title": "...", "key_learnings": "..."}

// Synthesis starting
{"type": "synthesis_start", "dossier_count": 5, "total_sources": 45}

// Research complete
{"type": "done", "data": {"final_document": "...", "source_registry": {...}}}
```
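Consuming these events boils down to decoding the `data:` lines of an SSE response. A stdlib-only sketch — a real client would read these lines from the streaming /research/deep response on http://localhost:8420:

```python
import json

def parse_sse_lines(lines):
    """Decode 'data: {...}' lines from an SSE stream into event dicts."""
    events = []
    for line in lines:
        line = line.strip()
        if line.startswith("data:"):
            events.append(json.loads(line[len("data:"):].strip()))
    return events

# Simulated stream fragment in the shapes shown above:
stream = [
    'data: {"type": "status", "message": "Searching Google..."}',
    '',  # SSE events are separated by blank lines
    'data: {"type": "point_complete", "point_title": "GaN", "key_learnings": "..."}',
]
events = parse_sse_lines(stream)
```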

Cost Comparison

Real benchmark: 513k input tokens, 55k output tokens

| Service | Cost | vs Lutum |
| --- | --- | --- |
| Lutum Veritas | $0.08 | - |
| ChatGPT Plus | $20/mo | Subscription |
| Perplexity Pro | $20/mo | Subscription |
| OpenAI o3 | $7.36 | 92x more |
| OpenAI o4-mini | $1.44 | 18x more |
| Google Gemini Pro | $2.95 | 37x more |
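The "vs Lutum" column follows directly from the per-run costs; checking the arithmetic:

```python
# Per-run costs from the comparison table above (USD).
lutum = 0.08
competitors = {
    "OpenAI o3": 7.36,
    "OpenAI o4-mini": 1.44,
    "Google Gemini Pro": 2.95,
}

# e.g. 7.36 / 0.08 = 92 -> "92x more"
multipliers = {name: round(cost / lutum) for name, cost in competitors.items()}
```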

Contributing

Contributions are welcome! Please:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing)
  5. Open a Pull Request

Development Setup

```bash
# Backend (with hot reload)
cd lutum-backend
uvicorn main:app --reload --port 8420

# Frontend (with hot reload)
cd lutum-desktop
npm run tauri dev
```

License

Lutum Veritas is licensed under the GNU Affero General Public License v3.0.

This means:

  • ✅ Free to use, modify, and distribute
  • ✅ Commercial use allowed
  • ⚠️ Must disclose source code (including SaaS)
  • ⚠️ Modifications must use same license

Commercial Licensing

Need to use Lutum Veritas without AGPL obligations? Commercial licenses are available.

Contact: iamlumae@gmail.com


Security

v1.3.0 Installer:

  • VirusTotal: 0/65 detections (Clean)
  • SHA256: 96faa40b63150632a96486086a2a778a4ec8a19b31dd06907d5178bb961fc287

Acknowledgments

  • Camoufox - The magic behind zero-detection scraping
  • Tauri - Lightweight desktop app framework
  • OpenRouter - Unified LLM API access

Built with obsessive attention to detail
Because truth shouldn't be locked behind paywalls

@IamLumae

Source: GitHub