Carry-forward memory
Every chat app forgets. Context windows fill up, threads fork, models get deprecated. Lippa's carry-forward engine extracts a structured record of every project — so switching models, reopening a Space, or starting a new chat picks up exactly where you left off.
How it works
Long context is a band-aid. Our engine runs beside every conversation, building a canonical project state you can read, edit, and export — independent of whichever model is answering today.
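A minimal sketch of the idea: an engine that folds each message into a model-independent project state. The names (`ProjectState`, `observe`) and the marker-based extraction are illustrative assumptions, not Lippa's actual implementation — a real extractor would use a model.

```python
from dataclasses import dataclass, field

@dataclass
class ProjectState:
    """Hypothetical canonical record of a project, independent of any model."""
    facts: list[str] = field(default_factory=list)
    decisions: list[str] = field(default_factory=list)
    open_questions: list[str] = field(default_factory=list)

def observe(state: ProjectState, message: str) -> ProjectState:
    """Run beside the conversation: fold one message into the state.
    Toy heuristic for illustration only — keys on surface markers."""
    text = message.strip()
    if text.lower().startswith("decision:"):
        state.decisions.append(text[len("decision:"):].strip())
    elif text.endswith("?"):
        state.open_questions.append(text)
    else:
        state.facts.append(text)
    return state

state = ProjectState()
for msg in ["The launch target is May.", "Decision: ship EU first.", "Who owns pricing?"]:
    observe(state, msg)
```

Because the state lives outside the conversation, swapping models or reopening a Space only means handing the new session this record, not the whole transcript.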
Design principles
We don't stuff the context window. A small, recent, high-signal brief beats a long transcript every time — and costs less.
Facts, decisions, and open questions live as typed fields. Summaries drift; structure doesn't.
The memory graph is a page you can open. Edit a fact. Delete a decision. Pin something. It's your record.
Export as Markdown or JSON, any time. The continuation pack is a plain prompt — nothing magic, nothing locked in.
Memory is per-project. Your research brief doesn't leak into your board reply. What you told one Space stays in that Space.
Your conversations, your memory graph, your files — none of it is used to train Lippa or any third-party model. Period.
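The principles above can be sketched together: typed fields instead of a summary, trimmed to a small high-signal brief, rendered as a plain Markdown prompt or JSON. The field names, the `max_items` budget, and the pack layout are assumptions for illustration, not Lippa's actual schema.

```python
import json

# Hypothetical typed memory for one project (per-Space, nothing shared).
state = {
    "facts": ["Launch target is May.", "Budget approved at $40k."],
    "decisions": ["Ship EU first."],
    "open_questions": ["Who owns pricing?"],
}

def continuation_pack(state: dict, max_items: int = 5) -> str:
    """Render a small, recent, high-signal Markdown brief — not a transcript.
    Keeps only the most recent max_items entries per field."""
    lines = ["# Project brief"]
    for section, items in state.items():
        lines.append(f"## {section.replace('_', ' ').title()}")
        lines += [f"- {item}" for item in items[-max_items:]]
    return "\n".join(lines)

pack = continuation_pack(state)   # plain Markdown prompt, nothing magic
as_json = json.dumps(state, indent=2)  # or export as JSON, any time
```

The pack is just text: paste it into any model, edit a line, delete a decision. That is the whole lock-in story — there isn't one.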
Privacy
A shared model is a borrowed brain. The point of having your own workspace is that the memory is actually yours — not a training input for someone else's next release.
Security
Security audit in progress; the report will be available on request to Business customers from Q3 2026.
TLS 1.3 in transit, AES-256 at rest. Per-workspace encryption keys on Enterprise.
SAML SSO and SCIM provisioning on Business. Okta, Google Workspace, Azure AD supported.
Every model call, export, and membership change logged with actor, timestamp, and target.
Data Processing Addendum included by default on Business. Sub-processor list public and versioned.
Route sensitive chats through providers with zero-retention flags — enabled per-Space.
Every chat, every extracted fact, every decision — exportable at any time, in plain formats anyone can read.
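Per-Space routing with a zero-retention flag could look like the sketch below. The Space names, provider labels, and the `X-Retention` header are hypothetical — actual provider retention controls vary and are configured in Lippa's settings, not hand-rolled like this.

```python
# Hypothetical per-Space routing table: each Space picks its own provider
# and retention posture; nothing here is an actual Lippa setting.
spaces = {
    "research": {"provider": "provider-a", "zero_retention": True},
    "board":    {"provider": "provider-b", "zero_retention": False},
}

def route(space: str) -> dict:
    """Resolve the provider for a Space, honoring its retention flag."""
    cfg = spaces[space]
    headers = {}
    if cfg["zero_retention"]:
        # Illustrative header: ask the provider not to retain
        # prompts or completions for this request.
        headers["X-Retention"] = "none"
    return {"provider": cfg["provider"], "headers": headers}
```

The point of the per-Space scope is visible in the table itself: a sensitive research Space can demand zero retention without changing how any other Space behaves.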