Ask your AI about any paper
in your Zotero library.

Your library, legible to every intelligence.

No more hallucinated citations. Biblion reads your Zotero database directly — sub-millisecond reads, no vector DB, no embeddings.

$ cargo install biblion

You asked Claude for a citation and it hallucinated the reference. With Biblion, your AI reads your actual library. Every citation is real. Every DOI resolves.

Watch it work.

Search, cite, and export — from your AI assistant.


No vector DB. The LLM is the semantic layer.

Every AI tool builds an embedding pipeline. Biblion just reads the database you already have.

            Typical RAG                   Biblion
Retrieval   Embeddings + vector DB        SQL query on local SQLite
Latency     100ms+ per query              < 1ms reads
Setup       Indexing pipeline, ingestion  cargo install, done
Network     Always required               Reads are fully local
Semantics   Pre-computed embeddings       The LLM itself
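The "SQL query on local SQLite" row can be sketched in a few lines. The snippet below mocks a tiny subset of Zotero's zotero.sqlite schema (items, itemData, itemDataValues, and fields are real Zotero table names, but the rows here are invented) and answers a search with a plain LIKE query. No embeddings, no index build. Biblion's actual queries are richer; this only shows the shape of the idea.

```python
import sqlite3

# A simplified subset of Zotero's schema, populated with mock rows.
# A real run would open the user's zotero.sqlite read-only instead of :memory:.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (itemID INTEGER PRIMARY KEY);
CREATE TABLE fields (fieldID INTEGER PRIMARY KEY, fieldName TEXT);
CREATE TABLE itemDataValues (valueID INTEGER PRIMARY KEY, value TEXT);
CREATE TABLE itemData (itemID INTEGER, fieldID INTEGER, valueID INTEGER);

INSERT INTO items VALUES (1), (2);
INSERT INTO fields VALUES (1, 'title'), (2, 'DOI');
INSERT INTO itemDataValues VALUES
  (1, 'Attention Is All You Need'),
  (2, '10.48550/arXiv.1706.03762'),
  (3, 'A Mathematical Theory of Communication');
INSERT INTO itemData VALUES (1, 1, 1), (1, 2, 2), (2, 1, 3);
""")

# Plain SQL stands in for an embedding index: match titles by substring.
def search_titles(term):
    rows = conn.execute("""
        SELECT v.value
        FROM itemData d
        JOIN fields f ON f.fieldID = d.fieldID
        JOIN itemDataValues v ON v.valueID = d.valueID
        WHERE f.fieldName = 'title' AND v.value LIKE '%' || ? || '%'
    """, (term,)).fetchall()
    return [r[0] for r in rows]

print(search_titles("Attention"))  # ['Attention Is All You Need']
```

The "semantic layer" then lives in the LLM that calls this tool: it decides which terms to search for and how to interpret the results, so no pre-computed vectors are needed.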

25 tools. One binary.

Everything you need to make your library addressable.

Search

Full-text search across titles, DOIs, abstracts, authors.

Cite

BibTeX, BibLaTeX, APA, IEEE — native generation, no plugins.

Export

Export entire collections as BibTeX, or as formatted bibliographies.

Resolve PDFs

9 academic sources queried concurrently over the network.

Write

Create items, attach PDFs, merge duplicates. Uses the Zotero API; disabled by default.

Browse

Collections, recent items, attachments, notes — all indexed.

By the numbers.

25
MCP Tools
<1ms
Read Latency
9
PDF Sources
~6MB
Binary Size

Get started in 30 seconds.

Install, add to Claude Code, done. Zero config for reads. Write tools need a Zotero API key.

$ cargo install biblion
View on GitHub → API Docs →