Ledger Pattern

Append-only Fact storage with single-writer guarantees. The foundation for all z0 state.

Prerequisites: PRINCIPLES.md, PRIMITIVES.md, durable-objects.md


The ledger pattern is how z0 stores Facts. Every Fact is appended to a ledger, never modified, never deleted. The ledger is the source of truth for everything that happened.

| Property      | Implementation                                     |
|---------------|-----------------------------------------------------|
| Append-only   | INSERT only, no UPDATE or DELETE                    |
| Single-writer | One DO per Entity serializes all writes             |
| Immutable     | Facts never change after creation                   |
| Replayable    | Cached state can be rebuilt from Facts              |
| Replicated    | Facts flow from DO to D1 for cross-entity queries   |

Key Insight: The ledger is not a pattern we impose—it emerges naturally from Principle 2 (Facts Are Immutable) and the per-entity DO architecture.


Each Entity with economic activity gets a ledger stored in its Durable Object’s SQLite database.

| Entity Type | Has Ledger | Why                                    |
|-------------|------------|----------------------------------------|
| account     | Yes        | Accumulates charges, costs, payments   |
| asset       | Yes        | Accumulates invocations, outcomes      |
| contact     | Yes        | Accumulates touches, identity events   |
| deal        | Yes        | Accumulates outcomes, lifecycle events |
| campaign    | No         | Container only, no direct Facts        |
| tool        | No         | Global, Facts reference tool_id        |
| vendor      | No         | Container only                         |
| user        | No         | Global, Facts reference user_id        |

DO: account_acct_123
└── SQLite
    ├── entity (1 row: the Entity record)
    ├── facts (N rows: the ledger)
    ├── configs (M rows: versioned Configs)
    └── cached_state (K rows: derived state)

The facts table IS the ledger. Everything else supports it.
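
A minimal sketch of how a per-entity DO might create these tables on startup, using the SQLite-backed Durable Object API (ctx.storage.sql) behind a blockConcurrencyWhile guard. The class and Env names are illustrative, only the facts and cached_state tables are shown, and the this.sql helper used in the snippets below is assumed to wrap ctx.storage.sql.

import { DurableObject } from "cloudflare:workers";

// Hypothetical per-entity ledger DO; names are illustrative, not prescriptive.
export class EntityLedger extends DurableObject {
  constructor(ctx: DurableObjectState, env: Env) {
    super(ctx, env);
    // Create the ledger tables before any request is served.
    ctx.blockConcurrencyWhile(async () => {
      ctx.storage.sql.exec(`
        CREATE TABLE IF NOT EXISTS facts (
          id TEXT PRIMARY KEY,
          type TEXT NOT NULL,
          subtype TEXT,
          timestamp INTEGER NOT NULL,
          tenant_id TEXT NOT NULL,
          data TEXT NOT NULL,
          replicated_at INTEGER
        )
      `);
      ctx.storage.sql.exec(`
        CREATE TABLE IF NOT EXISTS cached_state (
          state_type TEXT PRIMARY KEY,
          value TEXT NOT NULL,
          last_fact_id TEXT
        )
      `);
    });
  }
}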


1. Request arrives at Worker
2. Worker routes to Entity's DO
3. DO validates the Fact
4. DO appends to local SQLite (single-threaded, no race)
5. DO updates cached state
6. DO schedules replication alarm
7. Response returns to caller

async appendFact(fact: Fact): Promise<Fact> {
  // 1. Generate ID and timestamp if not provided
  fact.id = fact.id ?? generateFactId();
  fact.timestamp = fact.timestamp ?? Date.now();

  // 2. Resolve Config reference (for Facts that need it)
  if (requiresConfig(fact.type)) {
    const config = await this.resolveConfig(fact.type);
    fact.config_id = config.id;
    fact.config_version = config.version;
  }

  // 3. Append to ledger (single INSERT, no transaction needed)
  await this.sql.exec(`
    INSERT INTO facts (id, type, subtype, timestamp, tenant_id, data)
    VALUES (?, ?, ?, ?, ?, ?)
  `, [
    fact.id,
    fact.type,
    fact.subtype,
    fact.timestamp,
    fact.tenant_id,
    JSON.stringify(fact)
  ]);

  // 4. Update cached state
  await this.updateCachedState(fact);

  // 5. Schedule replication
  await this.scheduleReplication();

  return fact;
}

Single-threaded DO means:

  • No concurrent writes to same ledger
  • Each append is atomic by default
  • No need for explicit transactions on single-row inserts

For operations that may retry, include an idempotency key:

async appendFactIdempotent(fact: Fact, idempotencyKey: string): Promise<Fact> {
  // Check if already exists
  const existing = await this.sql.exec(
    `SELECT * FROM facts WHERE json_extract(data, '$.idempotency_key') = ?`,
    [idempotencyKey]
  ).first();

  if (existing) {
    return JSON.parse(existing.data); // Return existing, don't duplicate
  }

  fact.data = { ...fact.data, idempotency_key: idempotencyKey };
  return this.appendFact(fact);
}

Query the DO’s local SQLite for fast, consistent reads:

// Recent Facts for this entity
async getRecentFacts(limit: number = 100): Promise<Fact[]> {
  const rows = await this.sql.exec(
    `SELECT data FROM facts ORDER BY timestamp DESC LIMIT ?`,
    [limit]
  );
  return rows.map(r => JSON.parse(r.data));
}

// Facts by type
async getFactsByType(type: string): Promise<Fact[]> {
  const rows = await this.sql.exec(
    `SELECT data FROM facts WHERE type = ? ORDER BY timestamp`,
    [type]
  );
  return rows.map(r => JSON.parse(r.data));
}

// Facts in time range
async getFactsInRange(start: number, end: number): Promise<Fact[]> {
  const rows = await this.sql.exec(
    `SELECT data FROM facts WHERE timestamp >= ? AND timestamp <= ? ORDER BY timestamp`,
    [start, end]
  );
  return rows.map(r => JSON.parse(r.data));
}

For queries spanning multiple entities, use D1 (the replicated projection):

-- Total charges by campaign this month
SELECT campaign_id, SUM(json_extract(data, '$.amount')) as total
FROM facts
WHERE tenant_id = ?
  AND type = 'charge'
  AND timestamp >= ?
GROUP BY campaign_id;

-- All outcomes for a contact
SELECT * FROM facts
WHERE tenant_id = ?
  AND contact_id = ?
  AND type = 'outcome'
ORDER BY timestamp;
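
A Worker can run these projections with the standard D1 prepared-statement API. A sketch of the first query, assuming a D1 binding named DB and that replication has populated tenant_id, type, timestamp, and a campaign_id column in the D1 facts table:

interface CampaignTotal {
  campaign_id: string;
  total: number;
}

// Cross-entity aggregation against the D1 projection (binding name assumed).
async function chargesByCampaign(
  env: { DB: D1Database },
  tenantId: string,
  since: number
): Promise<CampaignTotal[]> {
  const { results } = await env.DB.prepare(
    `SELECT campaign_id, SUM(json_extract(data, '$.amount')) as total
       FROM facts
      WHERE tenant_id = ? AND type = 'charge' AND timestamp >= ?
      GROUP BY campaign_id`
  )
    .bind(tenantId, since)
    .all<CampaignTotal>();
  return results;
}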

| Query Type                 | Location | Why                      |
|----------------------------|----------|--------------------------|
| Single entity, recent      | DO       | Fastest, consistent      |
| Single entity, all history | DO       | Complete ledger          |
| Cross-entity aggregation   | D1       | Only place with all data |
| Tenant dashboard           | D1       | Cross-entity by design   |
| Real-time budget check     | DO       | Must be consistent       |
| Historical reporting       | D1       | Optimized for analytics  |

Facts must be replicated from DO SQLite to D1 for cross-entity queries.

DO SQLite (source of truth)
    │ 1. Fact appended
    ▼
Replication Queue
    │ 2. Batch sent to D1
    ▼
D1 (queryable projection)

Each Fact row tracks whether it’s been replicated:

CREATE TABLE facts (
  id TEXT PRIMARY KEY,
  type TEXT NOT NULL,
  subtype TEXT,
  timestamp INTEGER NOT NULL,
  tenant_id TEXT NOT NULL,
  data TEXT NOT NULL,
  replicated_at INTEGER  -- NULL until replicated
);

CREATE INDEX idx_unreplicated ON facts(replicated_at) WHERE replicated_at IS NULL;

async alarm(): Promise<void> {
  // Get unreplicated Facts
  const unreplicated = await this.sql.exec(
    `SELECT id, data FROM facts WHERE replicated_at IS NULL ORDER BY timestamp LIMIT 100`
  );
  if (unreplicated.length === 0) return;

  // Send to Queue for D1 insertion
  await this.env.REPLICATION_QUEUE.send({
    entity_id: this.entityId,
    facts: unreplicated.map(r => JSON.parse(r.data))
  });

  // Mark as replicated
  const ids = unreplicated.map(r => r.id);
  await this.sql.exec(
    `UPDATE facts SET replicated_at = ? WHERE id IN (${ids.map(() => '?').join(',')})`,
    [Date.now(), ...ids]
  );

  // Continue if more remain
  const remaining = await this.sql.exec(
    `SELECT COUNT(*) as c FROM facts WHERE replicated_at IS NULL`
  ).first();
  if (remaining.c > 0) {
    await this.storage.setAlarm(Date.now() + 100);
  }
}
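
On the consumer side, a queue handler writes the batched Facts into D1. A minimal sketch, assuming a D1 binding named DB, a D1 facts table with the same columns as the DO schema, this document's Fact type, and the message shape produced by the alarm handler above; INSERT OR IGNORE keeps the projection idempotent under at-least-once delivery.

interface ReplicationMessage {
  entity_id: string;
  facts: Fact[];
}

export default {
  // Queue consumer: batch-insert replicated Facts into the D1 projection.
  async queue(batch: MessageBatch<ReplicationMessage>, env: { DB: D1Database }) {
    const stmt = env.DB.prepare(
      `INSERT OR IGNORE INTO facts (id, type, subtype, timestamp, tenant_id, data)
       VALUES (?, ?, ?, ?, ?, ?)`
    );
    for (const msg of batch.messages) {
      await env.DB.batch(
        msg.body.facts.map(f =>
          stmt.bind(f.id, f.type, f.subtype ?? null, f.timestamp, f.tenant_id, JSON.stringify(f))
        )
      );
      msg.ack(); // Acknowledge only after the batch is written
    }
  },
};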

| Source    | Consistency            | Use For                          |
|-----------|------------------------|----------------------------------|
| DO SQLite | Strong (single-writer) | Writes, budget checks, real-time |
| D1        | Eventually consistent  | Reporting, dashboards, search    |

Latency: Replication typically completes within 100-500ms. For queries requiring strong consistency, always query the DO directly.


Cached state (BudgetState, CapState, etc.) is derived from Facts. If it diverges, replay the ledger.

async rebuildCachedState(stateType: string): Promise<void> {
  const startTime = Date.now();

  // 1. Get all relevant Facts
  const facts = await this.getFactsForState(stateType);

  // 2. Initialize empty state
  let state = initializeState(stateType);

  // 3. Replay each Fact
  for (const fact of facts) {
    state = applyFact(state, fact);
  }

  // 4. Store rebuilt state
  await this.setCachedState(stateType, state, facts[facts.length - 1]?.id);

  // 5. Record reconciliation Fact
  await this.appendFact({
    type: 'reconciliation',
    subtype: 'cache_rebuilt',
    data: {
      cache_type: stateType,
      facts_scanned: facts.length,
      duration_ms: Date.now() - startTime
    }
  });
}

function applyFactToBudgetState(state: BudgetState, fact: Fact): BudgetState {
  switch (fact.type) {
    case 'charge':
      return { ...state, spent: state.spent + fact.amount };
    case 'deposit':
      return { ...state, deposited: state.deposited + fact.amount };
    case 'credit_issued':
      return { ...state, credits: state.credits + fact.amount };
    default:
      return state;
  }
}

// Remaining budget = deposited + credits - spent
function calculateRemaining(state: BudgetState): number {
  return state.deposited + state.credits - state.spent;
}
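
As a worked illustration (values invented; only the counters the reducer touches are shown), replaying three Facts through the reducer gives:

// Worked replay: deposit 100, charge 30, credit 5.
const events = [
  { type: 'deposit', amount: 100 },
  { type: 'charge', amount: 30 },
  { type: 'credit_issued', amount: 5 },
] as Fact[];

let state = { deposited: 0, spent: 0, credits: 0 } as BudgetState;
for (const fact of events) {
  state = applyFactToBudgetState(state, fact);
}

calculateRemaining(state); // 100 + 5 - 30 = 75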

| Trigger                               | Action                                   |
|---------------------------------------|------------------------------------------|
| Reconciliation alarm detects mismatch | Auto-rebuild, record reconciliation Fact |
| Manual admin request                  | Rebuild, log reason                      |
| DO restart with corrupted state       | Rebuild from ledger                      |
| Schema migration                      | Rebuild all affected state types         |
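
One way the reconciliation trigger in the first row might look, reusing this document's this.sql and getCachedState helpers; the SUM-based check over 'charge' Facts is illustrative, not prescriptive.

async reconcileBudgetState(): Promise<void> {
  const cached = await this.getCachedState('BudgetState');

  // Ground truth: recompute spend directly from the ledger.
  const row = await this.sql.exec(
    `SELECT COALESCE(SUM(json_extract(data, '$.amount')), 0) AS spent
       FROM facts WHERE type = 'charge'`
  ).first();

  if (row.spent !== cached.spent) {
    // Divergence detected: replay the ledger and record a reconciliation Fact.
    await this.rebuildCachedState('BudgetState');
  }
}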

These must always hold:

// Facts are append-only
∀ Fact → never updated after creation
∀ Fact → never deleted
// Facts have required fields
∀ Fact → id ≠ null AND timestamp ≠ null AND type ≠ null
// Economic Facts trace to Configs
∀ Fact(charge | payout | outcome) → config_id ≠ null AND config_version ≠ null
// Costs trace to invocations
∀ Fact(cost) → ∃ Fact(invocation) WHERE cost.source_id = invocation.id
// Charges trace to outcomes
∀ Fact(charge) → ∃ Fact(outcome) WHERE charge.source_id = outcome.id
// Ledger is replayable
∀ CachedState → can be rebuilt from Facts
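
These invariants can be enforced mechanically at write time. A sketch of a guard that appendFact could call before inserting; the helper name and error messages are illustrative, and the existence checks for cost→invocation and charge→outcome would need a ledger lookup, reduced here to a source_id presence check.

function assertFactInvariants(fact: Fact): void {
  // Required fields
  if (!fact.id || !fact.timestamp || !fact.type) {
    throw new Error('Fact missing required id/timestamp/type');
  }

  // Economic Facts must trace to a Config version
  const economic = ['charge', 'payout', 'outcome'];
  if (economic.includes(fact.type) && (!fact.config_id || !fact.config_version)) {
    throw new Error(`${fact.type} Fact must reference a Config version`);
  }

  // Costs and charges must carry a source_id (full existence check needs a ledger lookup)
  const traced = ['cost', 'charge'];
  if (traced.includes(fact.type) && !fact.source_id) {
    throw new Error(`${fact.type} Fact must carry a source_id`);
  }
}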

Old Facts can be archived to R2 while maintaining ledger integrity.

1. Identify Facts older than retention period
2. Export to Parquet, upload to R2
3. Write lifecycle Fact recording the archive
4. Remove from D1 (projection only)
5. Keep reference in DO (ledger integrity)

The DO keeps a minimal reference to archived Facts:

CREATE TABLE archived_facts (
  id TEXT PRIMARY KEY,
  timestamp INTEGER NOT NULL,
  type TEXT NOT NULL,
  r2_key TEXT NOT NULL,        -- Where to find it
  archived_at INTEGER NOT NULL
);

This allows:

  • Ledger replay to include archived Facts (fetch from R2)
  • Queries to know archived data exists
  • Complete audit trail
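
A sketch of how a full replay might pull archived Facts back in, assuming an R2 binding named ARCHIVE, that each r2_key points at a JSON-serialized array of Facts (a Parquet export as described above would need a reader instead), and that archived rows have been pruned from the live facts table so nothing is double-counted. It reuses this document's this.sql helper.

async getAllFactsIncludingArchived(): Promise<Fact[]> {
  const facts: Fact[] = [];

  // Archived Facts first (they are older); one fetch per distinct archive object.
  const refs = await this.sql.exec(
    `SELECT r2_key, MIN(timestamp) AS ts FROM archived_facts GROUP BY r2_key ORDER BY ts`
  );
  for (const ref of refs) {
    const object = await this.env.ARCHIVE.get(ref.r2_key);
    if (object) facts.push(...await object.json<Fact[]>());
  }

  // Then the live ledger, in order.
  const live = await this.sql.exec(`SELECT data FROM facts ORDER BY timestamp`);
  facts.push(...live.map(r => JSON.parse(r.data)));

  return facts;
}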

Wrong:

await this.sql.exec(
  `UPDATE facts SET data = ? WHERE id = ?`,
  [newData, factId]
);

Right:

// Create correction Fact instead
await this.appendFact({
  type: 'correction',
  subtype: 'outcome_revised',
  source_id: originalFactId,
  data: { previous: oldValue, corrected: newValue, reason: '...' }
});

Wrong:

await this.sql.exec(`DELETE FROM facts WHERE id = ?`, [factId]);

Right:

// Facts are never deleted. Archive if needed, but ledger reference remains.

Wrong:

// Just use cached budget without validation
const budget = await this.getCachedState('BudgetState');
if (budget.remaining >= amount) proceed();

Right:

// For critical decisions, verify or rebuild
let budget = await this.getCachedState('BudgetState');
if (budget.lastReconciled < Date.now() - 5 * 60 * 1000) {
  await this.reconcileBudgetState();
  budget = await this.getCachedState('BudgetState'); // Re-read the rebuilt state
}
if (budget.remaining >= amount) proceed();

Wrong:

// Writing to another entity's ledger
const otherDO = env.ACCOUNT_LEDGER.get(otherAccountId);
await otherDO.appendFact(fact); // Breaks single-writer

Right:

// Each DO writes only to its own ledger
// Use sagas/workflows for cross-entity operations (see the sketch below)
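
A sketch of the cross-entity alternative: an orchestrating Worker (or Workflow step) asks each entity's DO to append its own Fact, so every ledger keeps exactly one writer. The binding names, the appendFact RPC method, and the function signature are assumed from the snippets above, not prescribed.

async function recordOutcomeAndCharge(
  env: Env,
  assetId: string,
  accountId: string,
  outcome: Fact,
  amount: number
): Promise<void> {
  // 1. The asset DO records the outcome in its own ledger.
  const asset = env.ASSET_LEDGER.get(env.ASSET_LEDGER.idFromName(assetId));
  const recordedOutcome = await asset.appendFact(outcome);

  // 2. The account DO records the charge in its own ledger,
  //    tracing back to the outcome via source_id.
  const account = env.ACCOUNT_LEDGER.get(env.ACCOUNT_LEDGER.idFromName(accountId));
  await account.appendFact({
    type: 'charge',
    source_id: recordedOutcome.id,
    tenant_id: recordedOutcome.tenant_id,
    data: { amount },
  } as Fact);
}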

| Concept              | Implementation                         |
|----------------------|----------------------------------------|
| Ledger location      | DO SQLite per Entity                   |
| Write guarantee      | Single-threaded, no races              |
| Immutability         | INSERT only, no UPDATE/DELETE          |
| Cross-entity queries | Replicate to D1                        |
| Cached state         | Derived from ledger, rebuildable       |
| Archival             | R2 for cold storage, keep DO reference |

The ledger pattern implements Principles 2 (Facts Are Immutable) and 7 (Derived State Is Disposable). The DO SQLite is the source of truth. D1 is a queryable projection. Cached state is an optimization that can always be rebuilt by replaying the ledger.