Engineering
Mar 18, 2026 · 6 min read

We never used vector embeddings. Here's why.

For our target use case, a tree-structured index gives better transparency, simpler operations, and cleaner citations than a traditional vector database stack.

By 3meel.ai team
RAG
Indexing

Most RAG products in 2026 start from the same assumption: split documents into chunks, generate embeddings, store vectors, run similarity search, and hand the results to an LLM. It works, but we decided not to build 3meel.ai that way.

Our target use case is narrower: developers who want an AI assistant to reason over 10 to 100 PDFs, not millions of unrelated documents. That shifted the trade-offs.

The default vector workflow

  • Split a document into chunks
  • Generate embeddings for each chunk
  • Store the embeddings in a vector database
  • Embed the question and retrieve the nearest chunks
  • Use those chunks to synthesize an answer
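The steps above can be sketched in a few lines. This is a toy illustration only: the `embed` function is a bag-of-characters stand-in for a real embedding model, and the chunks and query are invented, but the shape of the pipeline (embed, rank by cosine similarity, take the top k) is the one described.

```python
# Toy sketch of the default vector workflow. embed() stands in for a
# real embedding model; a production stack would also persist vectors
# in a dedicated database rather than a Python list.

import math

def embed(text: str) -> list[float]:
    # Stand-in embedding: a 26-dim bag-of-characters vector.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Embed the question, rank every chunk by similarity, return top k.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "Chapter 1 covers installation and setup.",
    "Chapter 2 explains the indexing pipeline.",
    "Chapter 3 documents the query API.",
]
top = retrieve("How does indexing work?", chunks, k=1)
```

Every step here is another place things can drift: the embedding model, the similarity metric, and the store all sit between the question and the answer.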

That stack is proven, but it brings model-selection risk, another data service to operate, and a debugging story that often ends with cosine-similarity guesswork.

What we built instead

Root summary
  -> Chapter summary
    -> Section summary
      -> Page references

3meel.ai builds a hierarchical summary tree. Each node summarizes its children, and the leaves point back to real PDF pages. At query time we traverse the tree top-down until we reach the pages that matter.
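A traversal over such a tree can be sketched as follows. The node names, summaries, and page numbers are invented, and keyword overlap here is only a stand-in for the relevance scoring the real system applies at each level; the point is the shape of the walk: descend into the best-matching child until a leaf yields real pages.

```python
# Sketch of top-down traversal over a hierarchical summary tree.
# score() is a toy relevance function; the tree contents are invented.

from dataclasses import dataclass, field

@dataclass
class Node:
    summary: str
    children: list["Node"] = field(default_factory=list)
    pages: list[int] = field(default_factory=list)  # leaves point at PDF pages

def score(query: str, summary: str) -> int:
    # Toy relevance: count query words that appear in the summary.
    return sum(w in summary.lower() for w in query.lower().split())

def traverse(node: Node, query: str) -> list[int]:
    # Walk from the root, descending into the best-matching child
    # until a leaf with page references is reached.
    while node.children:
        node = max(node.children, key=lambda c: score(query, c.summary))
    return node.pages

tree = Node("Whole report", [
    Node("Chapter on billing", [
        Node("Section on invoices", pages=[12, 13]),
        Node("Section on refunds", pages=[14]),
    ]),
    Node("Chapter on security", [
        Node("Section on encryption", pages=[30, 31]),
    ]),
])
```

Because every hop in the walk is a named node, the retrieval path itself is the explanation: "billing → refunds → page 14" is both the debug trace and the citation.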

Why we like the tree approach

  • Setup is simpler because there is no embedding pipeline or vector store to provision.
  • The tree is readable, so retrieval paths are easier to inspect and debug.
  • Citations fall out naturally because answers resolve to named sections and real pages.
  • The cost profile fits small to mid-sized knowledge bases well.

Where vectors still win

  • Very large corpora with vague semantic retrieval needs
  • Queries that need broad cross-document synthesis across huge datasets
  • Highly unstructured content with weak natural hierarchy

We are not arguing that vector search is wrong. We are arguing that it is often more infrastructure than this product category needs. For assistant-ready document sets, the tree approach is simpler, cheaper, and easier to explain.
