Carry-forward memory

Memory that survives the tab you close.

Every chat app forgets. Context windows fill up, threads fork, models get deprecated. Lippa's carry-forward engine extracts a structured record of every project — so switching models, reopening a Space, or starting a new chat picks up exactly where you left off.

How it works

Not a longer context window. A different kind of memory.

Long context is a band-aid. Our engine runs beside every conversation, building a canonical project state you can read, edit, and export — independent of whichever model is answering today.

01 · Observe
Every turn is watched, not intercepted. Lippa records the raw exchange between you and the model — the full text, the attachments, the model used, the cost. Nothing is re-processed yet.
02 · Extract
A small, cheap model distils structure. Facts ("the board meets monthly"), decisions ("we went with option A"), open questions, named entities, pinned artefacts. The output is a typed record, not prose.
03 · Reconcile
New facts merge with old ones. Contradictions are flagged ("last week you said B; today you said A — which is current?"). You can accept, reject, or keep both in history.
04 · Pack
A continuation pack for any model. When you start a new chat or switch models mid-thread, Lippa assembles a brief tuned to the next model's prompt format. Facts first, then decisions, then the open thread.
05 · Continue
The new model picks up mid-sentence. No "let me catch you up" turn. No re-pasting transcripts. The continuation pack travels silently in the system prompt.
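The five steps above can be sketched in code. This is a minimal illustration only — every name in it (`Fact`, `ProjectState`, `reconcile`, `build_pack`) is hypothetical, and the conflict check is a crude stand-in for whatever the real engine does:

```python
# Illustrative sketch of extract -> reconcile -> pack. Not Lippa's actual API.
from dataclasses import dataclass, field

@dataclass
class Fact:
    text: str
    turn: int            # which exchange it was extracted from
    superseded: bool = False

@dataclass
class ProjectState:
    facts: list[Fact] = field(default_factory=list)
    decisions: list[str] = field(default_factory=list)
    open_questions: list[str] = field(default_factory=list)

def same_topic(a: str, b: str) -> bool:
    # Crude placeholder: shared leading keyword. A real engine would
    # compare meaning, not surface words.
    return a.split()[0].lower() == b.split()[0].lower()

def reconcile(state: ProjectState, new_facts: list[Fact]) -> list[tuple[Fact, Fact]]:
    """Merge new facts into the state; return (old, new) pairs that conflict."""
    conflicts = []
    for new in new_facts:
        for old in state.facts:
            if not old.superseded and same_topic(old.text, new.text) and old.text != new.text:
                conflicts.append((old, new))   # flagged for the user to resolve
        state.facts.append(new)
    return conflicts

def build_pack(state: ProjectState) -> str:
    """Assemble a continuation brief: facts first, then decisions, then the open thread."""
    lines = ["## Facts"]
    lines += [f"- {f.text}" for f in state.facts if not f.superseded]
    lines += ["## Decisions"] + [f"- {d}" for d in state.decisions]
    lines += ["## Open questions"] + [f"- {q}" for q in state.open_questions]
    return "\n".join(lines)
```

The pack is just text, which is why it can travel in any model's system prompt.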

Design principles

Five rules the memory engine follows.

Recency beats volume

We don't stuff the context window. A small, recent, high-signal brief beats a long transcript every time — and costs less.

Structured, not summarised

Facts, decisions, and open questions live as typed fields. Summaries drift; structure doesn't.
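To make the distinction concrete, here is one plausible shape for a typed entry — purely illustrative, since Lippa's actual schema isn't shown here:

```python
# Hypothetical shape of a single typed memory entry.
decision = {
    "type": "decision",            # one of: fact, decision, open_question
    "text": "Went with option A",
    "made_on_turn": 12,
    "supersedes": None,            # links to an earlier decision if one was reversed
}
# A prose summary gets re-paraphrased on every pass and drifts; a typed field
# is carried forward verbatim, so its meaning stays fixed.
```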

Reviewable by humans

The memory graph is a page you can open. Edit a fact. Delete a decision. Pin something. It's your record.

Portable, not proprietary

Export as Markdown or JSON, any time. The continuation pack is a plain prompt — nothing magic, nothing locked in.
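"Plain formats" really does mean plain. A sketch of what an export amounts to — the entry fields here are made up for illustration, not Lippa's documented schema:

```python
# Illustrative export: the same entries as machine-readable JSON
# and human-readable Markdown. No proprietary container either way.
import json

entries = [
    {"type": "fact", "text": "The board meets monthly"},
    {"type": "decision", "text": "Went with option A"},
]

as_json = json.dumps(entries, indent=2)
as_markdown = "\n".join(f"- **{e['type']}**: {e['text']}" for e in entries)
```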

Scoped by Space

Memory is per-project. Your research brief doesn't leak into your board reply. What you told one Space stays in that Space.

No training, ever

Your conversations, your memory graph, your files — none of it is used to train Lippa or any third-party model. Period.

Privacy

Your memory, under your control.

A shared model is a borrowed brain. The point of having your own workspace is that the memory is actually yours — not a training input for someone else's next release.

No training
We don't train on your data, and our providers are contractually barred from doing so on requests routed through Lippa.
Full export
Download every chat, file, and memory entry as Markdown, JSON, or plain text. One click, your whole workspace.
Delete means delete
Remove a chat, a Space, or your entire account. 30-day grace period, then gone from backups too.
EU residency
Business plans can pin data to EU-hosted infrastructure — including EU-resident model endpoints where available.

Security

Serious about the boring parts.

SOC 2 Type II

Audit in progress. Report available on request for Business customers from Q3 2026.

Encryption

TLS 1.3 in transit, AES-256 at rest. Per-workspace encryption keys on Enterprise.

SSO & SCIM

SAML SSO and SCIM provisioning on Business. Okta, Google Workspace, Azure AD supported.

Audit logs

Every model call, export, and membership change logged with actor, timestamp, and target.

GDPR & DPA

Data Processing Addendum included by default on Business. Sub-processor list public and versioned.

Zero-retention routes

Route sensitive chats through providers with zero-retention flags — enabled per-Space.

The memory is yours. Take it with you.

Every chat, every extracted fact, every decision — exportable at any time, in plain formats anyone can read.