Linggen is a local-first memory layer that gives AI persistent context
across repos, docs, and time. It integrates with Cursor / Zed via MCP
and keeps everything on-device.
I built this because I kept re-explaining the same context to AI
across multiple projects. Happy to answer any questions.
Good question. Linggen itself always runs locally.
When using Claude Desktop, it connects to Linggen via a local MCP server (localhost), so indexing and memory stay on-device. The LLM can query that local context, but Linggen doesn’t push your data to the cloud.
Claude’s web UI doesn’t support local MCP today — if it ever does, it would just be a localhost URL.
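For reference, hooking a local MCP server into Claude Desktop is just an entry in claude_desktop_config.json that points at a local command, so everything stays on localhost. The linggen command and args below are illustrative, not the exact CLI:

```json
{
  "mcpServers": {
    "linggen": {
      "command": "linggen",
      "args": ["mcp", "serve"]
    }
  }
}
```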
Of course, parts of the context (as decided by the MCP server, based on the context, no pun intended) are returned to Claude, which processes them on its servers.
Yes, that’s correct — the model only sees the retrieved slices that the MCP server explicitly returns, similar to pasting selected context into a prompt.
The distinction I’m trying to make is that Linggen itself doesn’t sync or store project data in the cloud; retrieval and indexing stay local, and exposure to the LLM is scoped and intentional.
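To make "scoped and intentional" concrete, here's a minimal sketch of the shape of an MCP retrieval step (illustrative only, not Linggen's actual code): ranking happens against the local index, and only the top-k slices are handed back to the model.

```python
from dataclasses import dataclass

@dataclass
class Slice:
    path: str
    text: str

def overlap(query: str, text: str) -> float:
    # Stand-in for real embedding similarity: crude keyword overlap.
    terms = set(query.lower().split())
    return len(terms & set(text.lower().split())) / (len(terms) or 1)

def retrieve(query: str, index: list[Slice], k: int = 3) -> list[Slice]:
    """Rank locally; everything below the top-k cut never leaves the machine."""
    return sorted(index, key=lambda s: overlap(query, s.text), reverse=True)[:k]
```

Whatever the ranking function is, the boundary is the same: the handler's return value is the only thing the model ever sees.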
That's fine, but it's a very different claim to the one you made at first.
In particular, I don't know which parts of my data might get sent to Claude, so even if I hope it's only a small fraction, anything could in principle be transmitted.
I do have a local model path (Qwen3-4B) for testing.
The tradeoff is simply model quality vs locality, which is why Linggen focuses on controlling retrieval rather than claiming zero data ever leaves the device. Using a local LLM is straightforward if that’s the requirement.
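"Straightforward" meaning: any OpenAI-compatible local server can sit behind it. A minimal sketch of querying Qwen3-4B locally, assuming Ollama's default endpoint (the base_url, model tag, and prompt are just my test setup, not anything Linggen ships):

```python
from openai import OpenAI

# Local OpenAI-compatible server, e.g. Ollama's default endpoint.
# The api_key is ignored by local servers but required by the client.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="local")

resp = client.chat.completions.create(
    model="qwen3:4b",  # assumed local tag for Qwen3-4B; adjust to your runtime
    messages=[{"role": "user", "content": "What did we decide about auth last week?"}],
)
print(resp.choices[0].message.content)
```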