Durable Object Architecture

Complete DO class hierarchy, sharding strategy, and coordination patterns for z0.

Prerequisites: PRINCIPLES.md, PRIMITIVES.md, durable-objects.md, ledger-pattern.md


z0’s Durable Object architecture implements per-entity ledgers with single-writer guarantees. Each entity type that participates in economic activity gets its own DO class, creating a natural sharding strategy that scales writes linearly with entity count.
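
To ground the pattern, here is a minimal sketch of what a per-entity ledger class can look like on SQLite-backed Durable Objects. It is illustrative only: the Env type, the simplified facts table, and the synchronous appendFact are assumptions; the real per-class schemas and methods are specified later in this document.

import { DurableObject } from 'cloudflare:workers';

// Sketch only: a per-entity ledger DO. Assumes the class is bound with
// SQLite-backed storage (new_sqlite_classes migration) and an Env type
// generated by wrangler; names mirror later snippets in this document.
export class EntityLedger extends DurableObject<Env> {
  private sql: SqlStorage;

  constructor(ctx: DurableObjectState, env: Env) {
    super(ctx, env);
    this.sql = ctx.storage.sql;
    // Minimal ledger table; the real per-class schemas appear below.
    this.sql.exec(`CREATE TABLE IF NOT EXISTS facts (
      id TEXT PRIMARY KEY,
      type TEXT NOT NULL,
      timestamp INTEGER NOT NULL,
      data TEXT NOT NULL,
      replicated_at INTEGER
    )`);
  }

  // Append-only write path (Principle 2): Facts are inserted, never updated.
  appendFact(fact: { id: string; type: string; timestamp: number; data: unknown }) {
    this.sql.exec(
      'INSERT INTO facts (id, type, timestamp, data) VALUES (?, ?, ?, ?)',
      fact.id, fact.type, fact.timestamp, JSON.stringify(fact.data)
    );
    return fact;
  }
}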

| Property       | Implementation                                  |
| -------------- | ----------------------------------------------- |
| Sharding       | Per-entity (account, asset, contact, deal)      |
| Single-writer  | One DO per entity_id, serialized writes         |
| Hot path       | In-memory cache → SQLite cached_state → compute |
| Reconciliation | Alarm-based, periodic verification              |
| Replication    | DO SQLite → Queue → D1                          |

Key Insight: The DO architecture emerges from Principle 2 (Facts Are Immutable) and Principle 7 (Derived State Is Disposable). DOs are the source of truth; D1 is a queryable projection.


z0 uses four primary DO classes, each implementing the per-entity ledger pattern.

| Class     | Entities | Purpose                      | Hot Path                       |
| --------- | -------- | ---------------------------- | ------------------------------ |
| AccountDO | account  | Budget tracking, billing     | Budget checks, prepaid balance |
| AssetDO   | asset    | Invocation tracking, caps    | Eligibility, cap checks        |
| ContactDO | contact  | Identity resolution, touches | Canonical ID lookup            |
| DealDO    | deal     | Outcome tracking             | Deal status queries            |
DO instance names follow the convention:

{namespace}_{entity_id}

Examples:

  • account_acct_abc123 — AccountDO for account acct_abc123
  • asset_ast_xyz789 — AssetDO for asset ast_xyz789
  • contact_con_def456 — ContactDO for contact con_def456
  • deal_deal_ghi012 — DealDO for deal deal_ghi012

Rationale: Entity IDs are globally unique with type prefixes (e.g., acct_, ast_). DO names prepend the entity-type namespace to ensure uniqueness within Cloudflare’s DO namespace while preserving entity semantics.
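
A small helper makes the convention concrete; the function name and the prefix-to-namespace map below are illustrative, not part of the spec.

// Illustrative only: derive the DO instance name from a prefixed entity ID,
// e.g. 'acct_abc123' -> 'account_acct_abc123'
const NAMESPACE_BY_PREFIX: Record<string, string> = {
  acct: 'account',
  ast: 'asset',
  con: 'contact',
  deal: 'deal'
};

function doNameFor(entityId: string): string {
  const prefix = entityId.split('_')[0];
  const namespace = NAMESPACE_BY_PREFIX[prefix];
  if (!namespace) throw new Error(`Unknown entity prefix: ${prefix}`);
  return `${namespace}_${entityId}`;
}

// doNameFor('ast_xyz789') === 'asset_ast_xyz789'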

| Entity Type | Why No DO                       | Where Data Lives                   |
| ----------- | ------------------------------- | ---------------------------------- |
| campaign    | Container only, no direct Facts | Parent account's DO                |
| tool        | Global, shared                  | Static config in KV/D1             |
| vendor      | Container only                  | Static config in KV/D1             |
| user        | Global, access via Facts        | AccessState in affected entity DOs |

Note: Campaigns are container entities. They scope Configs and provide context, but Facts reference campaign_id and are stored in the relevant entity’s DO (account, asset, etc.).


AccountDO

Tracks economic activity for an account: charges, costs, payments, deposits.

Responsibilities:

  1. Budget tracking — BudgetState for RTB eligibility (Principle 10)
  2. Prepaid balance — PrepaidBalance for deposit/charge accounting
  3. AR/AP tracking — ARBalance, APBalance for billing
  4. Account-scoped Configs — Pricing, budget, billing Configs
  5. Financial Facts — charge, cost, payout, deposit, payment_* Facts

SQLite schema:

-- Entity record
CREATE TABLE entity (
  id TEXT PRIMARY KEY,
  type TEXT NOT NULL,              -- 'account'
  subtype TEXT NOT NULL,           -- 'platform', 'tenant', 'customer', 'publisher'
  tenant_id TEXT,
  data TEXT NOT NULL,              -- JSON blob of full Entity
  updated_at INTEGER NOT NULL
);

-- Facts ledger
CREATE TABLE facts (
  id TEXT PRIMARY KEY,
  type TEXT NOT NULL,
  subtype TEXT,
  timestamp INTEGER NOT NULL,
  tenant_id TEXT NOT NULL,
  source_id TEXT,
  entity_id TEXT,
  from_entity TEXT,
  to_entity TEXT,
  amount INTEGER,                  -- cents
  currency TEXT,
  config_id TEXT,
  config_version INTEGER,
  data TEXT NOT NULL,              -- JSON blob of full Fact
  replicated_at INTEGER            -- NULL until replicated to D1
);

CREATE INDEX idx_facts_type_ts ON facts(type, timestamp);
CREATE INDEX idx_facts_unreplicated ON facts(replicated_at) WHERE replicated_at IS NULL;
CREATE INDEX idx_facts_config ON facts(config_id, config_version) WHERE config_id IS NOT NULL;

-- Cached state
CREATE TABLE cached_state (
  key TEXT PRIMARY KEY,            -- 'BudgetState', 'PrepaidBalance', 'ARBalance', 'APBalance'
  value TEXT NOT NULL,             -- JSON blob
  computed_at INTEGER NOT NULL,
  facts_through TEXT               -- Last fact_id included
);

-- Configs
CREATE TABLE configs (
  id TEXT NOT NULL,
  version INTEGER NOT NULL,
  type TEXT NOT NULL,
  category TEXT NOT NULL,          -- 'policy' or 'logic'
  applies_to TEXT NOT NULL,
  scope TEXT NOT NULL,             -- 'account', 'campaign', 'asset'
  settings TEXT NOT NULL,          -- JSON blob
  effective_at INTEGER NOT NULL,
  superseded_at INTEGER,
  PRIMARY KEY (id, version)
);

CREATE INDEX idx_configs_active ON configs(id) WHERE superseded_at IS NULL;
CREATE INDEX idx_configs_type_scope ON configs(type, scope, applies_to);

Hot path (budget check):

async checkBudgetAvailability(amount: number): Promise<boolean> {
  // O(1) in-memory cache check
  if (this.hotBudgetState && !this.isStale(this.hotBudgetState)) {
    return this.hotBudgetState.remaining >= amount;
  }
  // Fallback to SQLite cached_state
  const cached = await this.getCachedState('BudgetState');
  this.hotBudgetState = { ...cached, cachedAt: Date.now() };
  return cached.remaining >= amount;
}

private isStale(state: HotState): boolean {
  return Date.now() - state.cachedAt > 60_000; // 1 minute TTL
}

Target latency: < 10ms for hot path (in-memory hit), < 50ms for cold path (SQLite).

| Type              | Purpose                                  |
| ----------------- | ---------------------------------------- |
| charge            | AR — money owed to us by customer        |
| cost              | AP — money owed to vendor for tool usage |
| payout            | AP — money owed to publisher partner     |
| deposit           | Prepaid funds received                   |
| payment_received  | AR payment received                      |
| payment_sent      | AP payment sent                          |
| credit_issued     | Adjustment/refund                        |
| invoice_created   | AR snapshot                              |
| invoice_closed    | Invoice finalized                        |
| statement_created | AP snapshot                              |
| statement_closed  | Statement finalized                      |

Cached state types:

interface BudgetState {
  deposited: number;        // Sum of deposit Facts
  spent: number;            // Sum of charge Facts
  credits: number;          // Sum of credit_issued Facts
  pending: number;          // Temporary holds during RTB
  remaining: number;        // deposited + credits - spent - pending
  last_fact_id: string;
  computed_at: number;
}

interface PrepaidBalance {
  balance: number;          // deposited - spent
  last_fact_id: string;
  computed_at: number;
}

interface ARBalance {
  balance: number;          // charges - payments - credits
  charges: number;
  payments: number;
  credits: number;
  last_fact_id: string;
  computed_at: number;
}

interface APBalance {
  balance: number;          // payouts + costs - payments
  payouts: number;
  costs: number;
  payments: number;
  last_fact_id: string;
  computed_at: number;
}
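
For illustration, BudgetState could be rebuilt by folding the ledger roughly as follows (a sketch inside AccountDO; the single aggregate query, the MAX(id) cursor, and the handling of RTB holds are assumptions, and reconciliation later refers to this kind of rebuild as calculateFromLedger).

async calculateBudgetState(): Promise<BudgetState> {
  // Fold financial Facts in this DO's ledger into a fresh BudgetState.
  // Assumes amounts are integer cents and Fact ids sort in append order.
  const row = this.sql.exec(`
    SELECT
      COALESCE(SUM(CASE WHEN type = 'deposit'       THEN amount END), 0) AS deposited,
      COALESCE(SUM(CASE WHEN type = 'charge'        THEN amount END), 0) AS spent,
      COALESCE(SUM(CASE WHEN type = 'credit_issued' THEN amount END), 0) AS credits,
      MAX(id) AS last_fact_id
    FROM facts
  `).one();

  const pending = 0; // Assumption: RTB holds are tracked separately, not folded here
  const deposited = Number(row.deposited);
  const spent = Number(row.spent);
  const credits = Number(row.credits);
  return {
    deposited,
    spent,
    credits,
    pending,
    remaining: deposited + credits - spent - pending,
    last_fact_id: String(row.last_fact_id ?? ''),
    computed_at: Date.now()
  };
}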

AssetDO

Tracks invocations and outcomes for an asset (phone number, form, link, etc.).

Responsibilities:

  1. Invocation tracking — Log all tool invocations
  2. Outcome tracking — Record business outcomes (qualified, booked, etc.)
  3. Cap enforcement — CapState for rate limiting
  4. Asset-scoped Configs — Pricing, qualification, routing Configs
  5. RTB eligibility — Check budget + caps + hours + status

Similar structure to AccountDO with entity-specific adjustments:

-- Entity record
CREATE TABLE entity (
  id TEXT PRIMARY KEY,
  type TEXT NOT NULL,              -- 'asset'
  subtype TEXT NOT NULL,           -- 'phone_number', 'form', 'link', etc.
  owner_id TEXT NOT NULL,          -- Account that owns this asset
  operated_by TEXT,                -- Tool that operates this asset
  tenant_id TEXT NOT NULL,
  data TEXT NOT NULL,
  updated_at INTEGER NOT NULL
);

-- Facts ledger
CREATE TABLE facts (
  id TEXT PRIMARY KEY,
  type TEXT NOT NULL,
  subtype TEXT,
  timestamp INTEGER NOT NULL,
  tenant_id TEXT NOT NULL,
  source_id TEXT,
  asset_id TEXT,                   -- Self-reference
  campaign_id TEXT,
  contact_id TEXT,
  deal_id TEXT,
  tool_id TEXT,
  parent_invocation_id TEXT,       -- For workflow chaining
  config_id TEXT,
  config_version INTEGER,
  data TEXT NOT NULL,
  replicated_at INTEGER
);

CREATE INDEX idx_facts_type_ts ON facts(type, timestamp);
CREATE INDEX idx_facts_campaign ON facts(campaign_id) WHERE campaign_id IS NOT NULL;
CREATE INDEX idx_facts_contact ON facts(contact_id) WHERE contact_id IS NOT NULL;
CREATE INDEX idx_facts_parent ON facts(parent_invocation_id) WHERE parent_invocation_id IS NOT NULL;

-- Cached state
CREATE TABLE cached_state (
  key TEXT PRIMARY KEY,            -- 'CapState', 'AvailabilityState'
  value TEXT NOT NULL,
  computed_at INTEGER NOT NULL,
  facts_through TEXT
);

-- Configs
-- (Same schema as AccountDO)

RTB eligibility check (hot path):

async checkEligibility(campaignId: string): Promise<EligibilityResult> {
  // In-memory hot state
  if (this.hotAvailability && !this.isStale(this.hotAvailability)) {
    return this.evaluateEligibility(this.hotAvailability, campaignId);
  }
  // Build eligibility from multiple sources
  const [entity, capState, budgetAvailable, withinHours] = await Promise.all([
    this.getEntity(),
    this.getCachedState('CapState'),
    this.checkCampaignBudget(campaignId), // Cross-DO call to campaign's account
    this.checkOperatingHours(campaignId)
  ]);
  const availability = {
    status: entity.status,
    budget_available: budgetAvailable,
    cap_available: capState.current_count < capState.cap_limit,
    within_hours: withinHours,
    cachedAt: Date.now()
  };
  this.hotAvailability = availability;
  return this.evaluateEligibility(availability, campaignId);
}

private evaluateEligibility(state: AvailabilityState, campaignId: string): EligibilityResult {
  const reasons: string[] = [];
  if (state.status !== 'active') reasons.push('asset_inactive');
  if (!state.budget_available) reasons.push('insufficient_budget');
  if (!state.cap_available) reasons.push('cap_exceeded');
  if (!state.within_hours) reasons.push('outside_hours');
  return {
    eligible: reasons.length === 0,
    reasons
  };
}

Target latency: < 10ms for RTB hot path.

| Type       | Purpose                                                 |
| ---------- | ------------------------------------------------------- |
| invocation | Tool was called (inbound_call, form_submit, etc.)       |
| outcome    | Business state resolved (qualified, booked, won, lost)  |

Cached state types:

interface CapState {
  period_start: number;      // Current period start timestamp
  period_duration: number;   // Period length in ms (e.g., 86400000 for daily)
  cap_type: string;          // 'daily_calls', 'hourly_calls', etc.
  current_count: number;     // Invocations in current period
  cap_limit: number;         // Max allowed in period
  reset_at: number;          // When period resets
  last_fact_id: string;
  computed_at: number;
}

interface AvailabilityState {
  available: boolean;        // Overall eligibility
  status: string;            // Entity status
  budget_available: boolean; // Campaign has budget
  cap_available: boolean;    // Asset hasn't hit cap
  within_hours: boolean;     // Within operating hours
  reasons: string[];         // Why unavailable (if any)
  last_fact_id: string;
  computed_at: number;
}
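
As a sketch of how CapState rolls over, the counter update inside AssetDO might look like the following; the method name and the in-place counter bump are assumptions.

async recordInvocationForCap(now = Date.now()): Promise<CapState> {
  // Roll the period over if it has expired, then count the new invocation
  let cap = await this.getCachedState('CapState') as CapState;
  if (now >= cap.reset_at) {
    cap = {
      ...cap,
      period_start: now,
      current_count: 0,
      reset_at: now + cap.period_duration
    };
  }
  cap = { ...cap, current_count: cap.current_count + 1, computed_at: now };
  await this.setCachedState('CapState', cap);
  return cap;
}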

AssetDO must check campaign budget during RTB, which lives in the campaign’s parent AccountDO.

async checkCampaignBudget(campaignId: string): Promise<boolean> {
  // Resolve campaign to account
  const campaign = await this.resolveCampaign(campaignId); // D1 or stub call
  // Stub call to AccountDO
  const accountDO = this.env.ACCOUNT_LEDGER.get(
    this.env.ACCOUNT_LEDGER.idFromName(`account_${campaign.parent_id}`)
  );
  const result = await accountDO.fetch(new Request('https://fake/budget/check', {
    method: 'POST',
    body: JSON.stringify({ campaign_id: campaignId })
  }));
  const { available } = await result.json();
  return available;
}

Performance note: Stub calls add latency (~5-20ms depending on geography). Cache the result in hot state with short TTL (30-60 seconds).
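
For example, the stub-call result could be memoized along the same lines as the Config cache shown later; the field name, TTL, and wrapper method below are illustrative.

private budgetCache = new Map<string, { available: boolean; expiresAt: number }>();

async checkCampaignBudgetCached(campaignId: string): Promise<boolean> {
  // Memoize the cross-DO answer for ~30s to keep the stub call off the RTB hot path
  const cached = this.budgetCache.get(campaignId);
  if (cached && cached.expiresAt > Date.now()) {
    return cached.available;
  }
  const available = await this.checkCampaignBudget(campaignId); // stub call shown above
  this.budgetCache.set(campaignId, { available, expiresAt: Date.now() + 30_000 });
  return available;
}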


ContactDO

Tracks identity resolution and attribution for a contact.

Responsibilities:

  1. Identity resolution — ContactCanonical for merge tracking
  2. Touch tracking — All interactions with this contact
  3. Attribution chain — Link contacts to deals
  4. Contact-scoped Facts — Touches, merges, identity events

SQLite schema:

-- Entity record
CREATE TABLE entity (
  id TEXT PRIMARY KEY,
  type TEXT NOT NULL,              -- 'contact'
  subtype TEXT,
  tenant_id TEXT NOT NULL,
  external_id TEXT,                -- CRM ID (HubSpot, Salesforce)
  external_source TEXT,            -- 'hubspot', 'salesforce'
  data TEXT NOT NULL,
  updated_at INTEGER NOT NULL
);

-- Facts ledger
CREATE TABLE facts (
  id TEXT PRIMARY KEY,
  type TEXT NOT NULL,
  subtype TEXT,
  timestamp INTEGER NOT NULL,
  tenant_id TEXT NOT NULL,
  contact_id TEXT,                 -- Self-reference
  deal_id TEXT,
  asset_id TEXT,
  campaign_id TEXT,
  data TEXT NOT NULL,
  replicated_at INTEGER
);

CREATE INDEX idx_facts_type_ts ON facts(type, timestamp);
CREATE INDEX idx_facts_deal ON facts(deal_id) WHERE deal_id IS NOT NULL;

-- Cached state
CREATE TABLE cached_state (
  key TEXT PRIMARY KEY,            -- 'ContactCanonical'
  value TEXT NOT NULL,
  computed_at INTEGER NOT NULL,
  facts_through TEXT
);

Hot path (canonical ID lookup):

async getCanonicalContactId(contactId: string): Promise<string> {
  // In-memory hot cache
  if (this.canonicalCache.has(contactId)) {
    return this.canonicalCache.get(contactId);
  }
  // Query cached state
  const canonical = await this.getCachedState('ContactCanonical');
  if (!canonical) return contactId; // No merges, contact is canonical
  // Cache and return
  this.canonicalCache.set(contactId, canonical.canonical_id);
  return canonical.canonical_id;
}

Critical requirement: ContactCanonical must be updated synchronously with contact_merged Fact append. Stale canonical IDs cause attribution errors.

| Type           | Purpose                                      |
| -------------- | -------------------------------------------- |
| invocation     | Touch event (call received, form submitted)  |
| outcome        | Business outcome attributed to contact       |
| contact_merged | Identity merge event                         |

Cached state type:

interface ContactCanonical {
  canonical_id: string;     // The surviving contact_id
  merged_ids: string[];     // All merged contact_ids (including canonical)
  merge_chain: Array<{      // Full merge history
    source_id: string;
    target_id: string;
    merged_at: number;
    fact_id: string;
  }>;
  last_fact_id: string;
  computed_at: number;
}

Critical Clarification: Cross-DO Merge Atomicity

Contact merges involve TWO contacts but execute in ONE DO:

  1. Merge Facts are written to the TARGET ContactDO only
  2. Source ContactDO remains unchanged — its canonical_id is resolved via lookup at query time
  3. No cross-DO transaction — this is by design (cross-DO atomicity is unsupported)

This works because:

  • The target ContactDO is authoritative for the merged identity
  • Any query for the source contact’s canonical_id looks up the target
  • The source ContactDO eventually learns it was merged during reconciliation (optional)

// This runs in the TARGET ContactDO
async mergeContacts(sourceId: string, targetId: string): Promise<void> {
  // Validate: this IS the target DO
  if (this.entityId !== targetId) {
    throw new Error('Merge must be called on target ContactDO');
  }
  // Single atomic operation within THIS DO only
  await this.sql.exec('BEGIN TRANSACTION');
  try {
    // 1. Append merge Fact (to target's ledger)
    const mergeFact = await this.appendFact({
      type: 'lifecycle',
      subtype: 'contact_merged',
      timestamp: Date.now(),
      data: {
        source_id: sourceId,
        target_id: targetId,
        canonical_id: targetId
      }
    });
    // 2. Update ContactCanonical immediately (same transaction)
    const existing = await this.getCachedState('ContactCanonical') || {
      canonical_id: this.entityId,
      merged_ids: [this.entityId],
      merge_chain: []
    };
    await this.setCachedState('ContactCanonical', {
      canonical_id: targetId,
      merged_ids: [...new Set([...existing.merged_ids, sourceId])],
      merge_chain: [
        ...existing.merge_chain,
        {
          source_id: sourceId,
          target_id: targetId,
          merged_at: Date.now(),
          fact_id: mergeFact.id
        }
      ],
      last_fact_id: mergeFact.id,
      computed_at: Date.now()
    });
    await this.sql.exec('COMMIT');
    // 3. Notify source ContactDO (async, best-effort)
    // Source DO can update its own canonical pointer on next access
    await this.notifySourceOfMerge(sourceId, targetId, mergeFact.id);
  } catch (error) {
    await this.sql.exec('ROLLBACK');
    throw error;
  }
}

// Canonical resolution handles the source → target lookup
async resolveCanonical(contactId: string): Promise<string> {
  // If this contact was merged INTO another, return that target
  // If this contact has others merged into it, return self
  // This handles both source and target lookups
  const canonical = await this.getCachedState('ContactCanonical');
  return canonical?.canonical_id ?? this.entityId;
}
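
notifySourceOfMerge is referenced above but not defined; a best-effort sketch might look like the following, assuming a CONTACT_LEDGER binding and a /canonical/set route on ContactDO (both illustrative).

// Best-effort notification so the SOURCE ContactDO can point at the target.
// Failures are swallowed; canonical resolution still works via query-time lookup.
async notifySourceOfMerge(sourceId: string, targetId: string, factId: string): Promise<void> {
  try {
    const sourceDO = this.env.CONTACT_LEDGER.get(
      this.env.CONTACT_LEDGER.idFromName(`contact_${sourceId}`)
    );
    await sourceDO.fetch(new Request('https://fake/canonical/set', {
      method: 'POST',
      body: JSON.stringify({ canonical_id: targetId, merge_fact_id: factId })
    }));
  } catch {
    // Best effort only; reconciliation or the next lookup catches up later
  }
}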

DealDO

Tracks lifecycle and outcomes for a deal (synced from an external CRM).

Responsibilities:

  1. Deal lifecycle — Status changes, outcome tracking
  2. Outcome linking — Connect outcomes to deal
  3. Deal-scoped Facts — Outcomes, lifecycle events

SQLite schema:

-- Entity record
CREATE TABLE entity (
  id TEXT PRIMARY KEY,
  type TEXT NOT NULL,              -- 'deal'
  subtype TEXT,
  tenant_id TEXT NOT NULL,
  contact_id TEXT NOT NULL,        -- Required: deals belong to contacts
  external_id TEXT NOT NULL,       -- Required: CRM deal ID
  external_source TEXT NOT NULL,   -- Required: 'hubspot', 'salesforce'
  data TEXT NOT NULL,
  updated_at INTEGER NOT NULL
);

-- Facts ledger
CREATE TABLE facts (
  id TEXT PRIMARY KEY,
  type TEXT NOT NULL,
  subtype TEXT,
  timestamp INTEGER NOT NULL,
  tenant_id TEXT NOT NULL,
  deal_id TEXT,                    -- Self-reference
  contact_id TEXT NOT NULL,        -- Required on deal Facts
  campaign_id TEXT,
  asset_id TEXT,
  data TEXT NOT NULL,
  replicated_at INTEGER
);

CREATE INDEX idx_facts_type_ts ON facts(type, timestamp);
CREATE INDEX idx_facts_contact ON facts(contact_id);

| Type      | Purpose                                          |
| --------- | ------------------------------------------------ |
| outcome   | Deal outcome (deal_created, deal_won, deal_lost) |
| lifecycle | Deal status changed                              |

Note: DealDO does not have complex cached state. Deals are simple lifecycle entities. Current state is just the latest outcome Fact.
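
For illustration, reading the current status off the ledger could be as simple as this sketch (the method name is assumed).

async getCurrentStatus(): Promise<string | null> {
  // Latest outcome Fact wins; no cached_state needed for DealDO
  const rows = this.sql.exec(`
    SELECT subtype FROM facts
    WHERE type = 'outcome'
    ORDER BY timestamp DESC
    LIMIT 1
  `).toArray();
  return rows.length > 0 ? String(rows[0].subtype) : null;
}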


Configs follow scope precedence: asset > campaign > account (from PRIMITIVES.md). Resolution uses DO-local-first with stub calls.

// Inside AssetDO
async resolveConfig(configType: string, campaignId?: string): Promise<Config> {
  // 1. Check asset-scoped Config (own SQLite)
  const assetConfig = await this.getActiveConfig(configType);
  if (assetConfig) return assetConfig;
  // 2. Check campaign-scoped Config: campaigns have no DO, so campaign Configs
  //    live in the parent AccountDO
  if (campaignId) {
    const campaign = await this.resolveCampaign(campaignId); // D1 lookup
    const accountDO = this.env.ACCOUNT_LEDGER.get(
      this.env.ACCOUNT_LEDGER.idFromName(`account_${campaign.parent_id}`)
    );
    const campaignConfig = await accountDO.getConfigForEntity(configType, campaignId);
    if (campaignConfig) return campaignConfig;
  }
  // 3. Check account-scoped Config (stub call to AccountDO)
  const entity = await this.getEntity();
  const accountDO = this.env.ACCOUNT_LEDGER.get(
    this.env.ACCOUNT_LEDGER.idFromName(`account_${entity.owner_id}`)
  );
  const accountConfig = await accountDO.getActiveConfig(configType);
  if (accountConfig) return accountConfig;
  // 4. No Config found
  throw new Error(`No ${configType} Config found for asset ${this.entityId}`);
}

Resolved Configs are cached in-memory with short TTL to reduce stub call frequency.

private configCache = new Map<string, { config: Config; expiresAt: number }>();

async resolveConfigCached(configType: string, campaignId?: string): Promise<Config> {
  const cacheKey = `${configType}:${campaignId || 'none'}`;
  const cached = this.configCache.get(cacheKey);
  if (cached && cached.expiresAt > Date.now()) {
    return cached.config;
  }
  const config = await this.resolveConfig(configType, campaignId);
  this.configCache.set(cacheKey, {
    config,
    expiresAt: Date.now() + 60_000 // 1 minute TTL
  });
  return config;
}

Trade-off: Short TTL (60 seconds) ensures Config changes propagate quickly while reducing cross-DO calls on hot path.


DOs are isolated by design. Cross-DO operations require explicit coordination.

Synchronous Stub Calls

For read operations where latency is acceptable.

// AssetDO checking budget in AccountDO
const accountDO = this.env.ACCOUNT_LEDGER.get(
  this.env.ACCOUNT_LEDGER.idFromName(`account_${accountId}`)
);
const response = await accountDO.fetch(new Request('https://fake/budget/check', {
  method: 'POST',
  body: JSON.stringify({ campaign_id: campaignId, amount: estimatedCost })
}));
const { available } = await response.json();

Pros:

  • Synchronous, easy to reason about
  • Authoritative (reading source of truth)
  • No eventual consistency issues

Cons:

  • Adds latency (5-20ms per call)
  • Doesn’t scale to high-fanout scenarios
  • Both DOs must be available

Use when:

  • Hot path can tolerate latency
  • Result must be authoritative
  • 1-to-1 or 1-to-few coordination

Asynchronous Queue Writes

For write operations that don’t need immediate consistency.

// AssetDO recording charge to AccountDO
await this.env.CHARGE_QUEUE.send({
entity_id: accountId,
campaign_id: campaignId,
amount: cost,
source_fact_id: invocationFact.id
});

Worker processes queue:

async function handleChargeMessage(message: ChargeMessage) {
  const accountDO = env.ACCOUNT_LEDGER.get(
    env.ACCOUNT_LEDGER.idFromName(`account_${message.entity_id}`)
  );
  await accountDO.appendFact({
    type: 'charge',
    source_id: message.source_fact_id,
    amount: message.amount,
    // ...
  });
}

Pros:

  • Asynchronous, doesn’t block caller
  • Scales to high-fanout (1-to-many)
  • Retry/DLQ built into Queues

Cons:

  • Eventual consistency
  • No immediate confirmation
  • Requires idempotency (see the sketch below)

Use when:

  • Operation can be asynchronous
  • High fanout (1 event → N updates)
  • Eventual consistency acceptable
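
Since Queues deliver at least once, the consumer's append must be idempotent. One sketch: derive the Fact id deterministically from the source Fact so redeliveries collapse into a no-op insert (the id scheme and consumer name below are assumptions).

async function handleChargeMessageIdempotent(message: ChargeMessage, env: Env) {
  const accountDO = env.ACCOUNT_LEDGER.get(
    env.ACCOUNT_LEDGER.idFromName(`account_${message.entity_id}`)
  );
  await accountDO.appendFact({
    id: `charge_${message.source_fact_id}`, // deterministic: a redelivery produces the same id
    type: 'charge',
    source_id: message.source_fact_id,
    amount: message.amount,
    timestamp: Date.now()
  });
  // Inside the DO, appendFact can use INSERT OR IGNORE (or ON CONFLICT DO NOTHING)
  // on the primary key so a duplicate append is a no-op.
}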

Optimistic Execution with Async Verification

For operations where speed matters but correctness is verified later.

// AssetDO optimistically assumes budget available
async handleInvocation(invocation: Invocation): Promise<void> {
  // 1. Optimistic append (no budget check)
  const fact = await this.appendFact({
    type: 'invocation',
    // ...
  });
  // 2. Queue async budget verification
  await this.env.VERIFICATION_QUEUE.send({
    invocation_fact_id: fact.id,
    campaign_id: invocation.campaign_id,
    estimated_cost: invocation.estimated_cost
  });
  // 3. Return immediately (optimistic)
  return;
}

Verification worker:

async function verifyBudget(message: VerificationMessage) {
  const accountDO = env.ACCOUNT_LEDGER.get(...);
  const budgetAvailable = await accountDO.checkBudget(message.campaign_id, message.estimated_cost);
  if (!budgetAvailable) {
    // Record budget violation, trigger alert
    await accountDO.appendFact({
      type: 'reconciliation',
      subtype: 'budget_overrun_detected',
      data: {
        invocation_fact_id: message.invocation_fact_id,
        overage_amount: message.estimated_cost
      }
    });
  }
}

Pros:

  • Fast (no synchronous cross-DO call)
  • Eventually correct
  • Violation auditable

Cons:

  • Allows temporary inconsistency
  • Requires cleanup/alerting on violation
  • More complex than synchronous

Use when:

  • Hot path latency critical
  • Violations rare and tolerable
  • Auditability more important than prevention

D1 Projection Queries

For queries spanning multiple entities, use the D1 projection.

// Dashboard: total charges by campaign this month
const result = await this.env.D1.prepare(`
SELECT campaign_id, SUM(json_extract(data, '$.amount')) as total
FROM facts
WHERE tenant_id = ?
AND type = 'charge'
AND timestamp >= ?
GROUP BY campaign_id
`).bind(tenantId, startOfMonth).all();

Pros:

  • Single query, no cross-DO calls
  • Optimized for aggregation
  • Scales to large datasets

Cons:

  • Eventually consistent (replication lag)
  • Cannot write via D1
  • Stale data (100-500ms lag)

Use when:

  • Cross-entity aggregation
  • Reporting/dashboards
  • Eventual consistency acceptable

Each DO runs periodic reconciliation to verify cached state matches ledger truth.

class EntityLedger {
  async alarm(): Promise<void> {
    const alarmType = await this.storage.get('alarm_type');
    switch (alarmType) {
      case 'reconciliation':
        await this.runReconciliation();
        await this.scheduleReconciliation(); // Reschedule
        break;
      case 'replication':
        await this.runReplication();
        // Replication only fires when needed
        break;
    }
  }

  async scheduleReconciliation(): Promise<void> {
    await this.storage.put('alarm_type', 'reconciliation');
    await this.storage.setAlarm(Date.now() + 5 * 60 * 1000); // 5 minutes
  }

  async runReconciliation(): Promise<void> {
    // Reconcile all cached states for this DO
    const stateTypes = this.getCachedStateTypes(); // ['BudgetState', 'CapState', ...]
    for (const stateType of stateTypes) {
      await this.reconcileState(stateType);
    }
  }

  async reconcileState(stateType: string): Promise<void> {
    const startTime = Date.now();
    // 1. Get cached value
    const cached = await this.getCachedState(stateType);
    // 2. Calculate from ledger
    const calculated = await this.calculateFromLedger(stateType);
    // 3. Compare
    if (this.statesMatch(cached, calculated)) {
      return; // All good
    }
    // 4. Record mismatch
    await this.appendFact({
      type: 'reconciliation',
      subtype: 'mismatch_detected',
      timestamp: Date.now(),
      data: {
        cache_type: stateType,
        cached_value: cached,
        calculated_value: calculated,
        delta: this.computeDelta(cached, calculated),
        resolution: 'cache_updated',
        facts_scanned: await this.countFactsForState(stateType),
        duration_ms: Date.now() - startTime
      }
    });
    // 5. Update cache to match ledger
    await this.setCachedState(stateType, calculated);
  }
}

Frequency: Every 5 minutes per DO.

Performance: Reconciliation should complete in < 100ms for typical entities (< 10,000 Facts). For large entities, implement incremental reconciliation.
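
A sketch of the incremental variant, assuming Fact ids sort in append order so the cached last_fact_id can serve as a cursor (the method name and BudgetState-specific fold are illustrative):

async reconcileBudgetStateIncremental(): Promise<void> {
  const cached = await this.getCachedState('BudgetState');
  if (!cached) {
    return this.reconcileState('BudgetState'); // no cursor yet: fall back to a full rebuild
  }
  // Fold only Facts appended after the cached cursor instead of rescanning the ledger
  const newFacts = this.sql.exec(`
    SELECT id, type, amount FROM facts
    WHERE id > ? AND type IN ('deposit', 'charge', 'credit_issued')
    ORDER BY id
  `, cached.last_fact_id).toArray();
  if (newFacts.length === 0) return;

  const next = { ...cached };
  for (const f of newFacts) {
    if (f.type === 'deposit') next.deposited += Number(f.amount);
    if (f.type === 'charge') next.spent += Number(f.amount);
    if (f.type === 'credit_issued') next.credits += Number(f.amount);
  }
  next.remaining = next.deposited + next.credits - next.spent - next.pending;
  next.last_fact_id = String(newFacts[newFacts.length - 1].id);
  next.computed_at = Date.now();
  await this.setCachedState('BudgetState', next);
}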


Facts replicate from DO SQLite to D1 for cross-entity queries.

1. Fact appended to DO SQLite
2. Replication alarm scheduled (debounced)
3. Alarm fires, batches unreplicated Facts
4. Facts sent to Queue
5. Queue consumer inserts to D1
6. DO marks Facts as replicated
async scheduleReplication(): Promise<void> {
  // Debounce: only schedule if no alarm pending
  const existingAlarm = await this.storage.getAlarm();
  if (existingAlarm !== null) return;
  await this.storage.put('alarm_type', 'replication');
  await this.storage.setAlarm(Date.now() + 100); // 100ms debounce
}

async runReplication(): Promise<void> {
  const BATCH_SIZE = 100;
  // Get unreplicated Facts
  const unreplicated = await this.sql.exec(`
    SELECT id, data FROM facts
    WHERE replicated_at IS NULL
    ORDER BY timestamp
    LIMIT ?
  `, [BATCH_SIZE]);
  if (unreplicated.length === 0) return;
  // Send to Queue for D1 insertion
  await this.env.REPLICATION_QUEUE.send({
    entity_id: this.entityId,
    facts: unreplicated.map(row => ({
      id: row.id,
      data: JSON.parse(row.data)
    }))
  });
  // Mark as replicated
  const ids = unreplicated.map(row => row.id);
  await this.sql.exec(`
    UPDATE facts
    SET replicated_at = ?
    WHERE id IN (${ids.map(() => '?').join(',')})
  `, [Date.now(), ...ids]);
  // Continue if more remain
  const remaining = await this.sql.exec(
    `SELECT COUNT(*) as count FROM facts WHERE replicated_at IS NULL`
  ).first();
  if (remaining.count > 0) {
    await this.storage.setAlarm(Date.now() + 100); // Immediate continuation
  }
}

Queue consumer:

async function handleReplicationMessage(message: ReplicationMessage) {
  const { entity_id, facts } = message;
  // Batch insert to D1
  const stmt = env.D1.prepare(`
    INSERT INTO facts (id, type, subtype, timestamp, tenant_id, entity_id, data)
    VALUES (?, ?, ?, ?, ?, ?, ?)
    ON CONFLICT (id) DO NOTHING
  `);
  const batch = facts.map(fact =>
    stmt.bind(
      fact.data.id,
      fact.data.type,
      fact.data.subtype,
      fact.data.timestamp,
      fact.data.tenant_id,
      entity_id,
      JSON.stringify(fact.data)
    )
  );
  await env.D1.batch(batch);
}

Latency: Typically 100-500ms from append to D1 availability.


Per-entity sharding scales writes linearly but requires understanding of access patterns.

| Entity Count | DOs     | Write Throughput   |
| ------------ | ------- | ------------------ |
| 1,000        | 1,000   | 1M req/sec (1k/DO) |
| 10,000       | 10,000  | 10M req/sec        |
| 100,000      | 100,000 | 100M req/sec       |

Key insight: Each DO handles ~1,000 req/sec. Writes scale with entity count, not tenant count.

Scenario: High-volume asset (e.g., main call center number) receiving 500 calls/minute.

Bottleneck: Single DO serializes all writes.

Solutions:

  1. Shard by time window — Create sub-entities per hour/day:

    asset_ast_main_2026-01-17-14 (1-2 PM)
    asset_ast_main_2026-01-17-15 (2-3 PM)

    Aggregate at query time (see the sketch below).

  2. Buffer in Queue — Write to Queue first, batch process:

    Invocation → Queue → Worker → AssetDO (batched)

    Reduces DO request rate.

  3. Accept serialization — 500/min = 8.3/sec, well within DO limits. Monitor and shard only if approaching 1,000/sec.

Recommendation: Monitor first. Shard if DO CPU time consistently > 10ms or request queue depth > 10.
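
A sketch of the time-window variant from option 1; the bucket granularity, helper name, and UTC bucketing are illustrative.

// Route writes for a hot asset to an hourly sub-entity DO,
// e.g. 2:15 PM UTC on 2026-01-17 -> 'asset_ast_main_2026-01-17-14'
function hourlyAssetDOName(assetId: string, when = new Date()): string {
  const bucket = when.toISOString().slice(0, 13).replace('T', '-'); // 'YYYY-MM-DD-HH'
  return `asset_${assetId}_${bucket}`;
}

// Writes go to the current bucket; reads aggregate buckets via the D1 projection
const assetDO = env.ASSET_LEDGER.get(
  env.ASSET_LEDGER.idFromName(hourlyAssetDOName('ast_main'))
);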

Each tenant’s data is isolated by tenant_id on Facts. Cross-tenant queries are impossible without an elevated API key.

Workers for Platforms: Each tenant gets isolated namespace. DOs are tenant-scoped by construction:

Tenant A namespace:
- account_acct_abc123
- asset_ast_xyz789
Tenant B namespace:
- account_acct_def456
- asset_ast_uvw012

No cross-contamination possible.


Anti-Patterns

1. Global Singleton DO

Wrong:

// Single DO handling all routing decisions
class GlobalRouter extends DurableObject {
  async route(request: Request): Promise<Response> {
    // Bottleneck: all traffic serialized
  }
}

Right:

// Route to entity-specific DO
const assetDO = env.ASSET_LEDGER.get(
  env.ASSET_LEDGER.idFromName(`asset_${assetId}`)
);
return assetDO.fetch(request);

2. Cross-DO Transactions

Wrong:

// Trying to coordinate writes across DOs
async transferBudget(fromCampaign: string, toCampaign: string, amount: number) {
  const from = getAccountDO(fromCampaign);
  const to = getAccountDO(toCampaign);
  // This doesn't work! No distributed transactions.
  await from.debit(amount);
  await to.credit(amount);
}

Right:

// Use saga pattern with compensating actions
async transferBudget(fromCampaign: string, toCampaign: string, amount: number) {
  const from = getAccountDO(fromCampaign);
  const to = getAccountDO(toCampaign);
  // 1. Reserve from source
  const reservation = await from.reserveBudget(amount);
  // 2. Credit destination
  try {
    await to.credit(amount, reservation.id);
  } catch (error) {
    // 3. Compensate: release reservation
    await from.releaseReservation(reservation.id);
    throw error;
  }
  // 4. Confirm debit
  await from.confirmReservation(reservation.id);
}

3. Caching Another Entity's State

Wrong:

// AccountDO caching state from multiple assets
class AccountDO {
  private assetStates = new Map<string, AssetState>();

  async updateAssetState(assetId: string, state: AssetState) {
    this.assetStates.set(assetId, state);
  }
}

Right:

// Each entity owns its own state
// Cross-entity queries go to D1
const assetStates = await env.D1.prepare(`
  SELECT asset_id, json_extract(data, '$.status') as status
  FROM cached_state
  WHERE tenant_id = ? AND cache_type = 'AssetState'
`).bind(tenantId).all();

4. Long-Running Operations in Request Handler

Wrong:

async fetch(request: Request): Promise<Response> {
  // This times out at 30 seconds
  for (const fact of allFacts) {
    await this.processFactSlowly(fact);
  }
  return new Response('Done');
}

Right:

async fetch(request: Request): Promise<Response> {
  // Queue the work
  await this.storage.put('pending_work', allFacts);
  await this.storage.setAlarm(Date.now());
  return new Response('Queued');
}

async alarm(): Promise<void> {
  const work = await this.storage.get('pending_work');
  const batch = work.splice(0, 100);
  for (const fact of batch) {
    await this.processFactSlowly(fact);
  }
  if (work.length > 0) {
    await this.storage.put('pending_work', work);
    await this.storage.setAlarm(Date.now() + 10);
  }
}

Performance Targets

Latency targets:

| Operation                     | Target  | Measured By |
| ----------------------------- | ------- | ----------- |
| RTB eligibility check         | < 10ms  | P99         |
| Fact append                   | < 5ms   | P99         |
| Config resolution (cached)    | < 1ms   | P95         |
| Config resolution (stub call) | < 20ms  | P99         |
| Budget check (hot state)      | < 1ms   | P95         |
| Budget check (SQLite)         | < 5ms   | P99         |
| Reconciliation (per state)    | < 100ms | P95         |
| Replication (per batch)       | < 200ms | P95         |

Throughput targets:

| DO Type   | Target RPS | Notes                   |
| --------- | ---------- | ----------------------- |
| AccountDO | 100/sec    | Most are read-heavy     |
| AssetDO   | 500/sec    | High-volume assets      |
| ContactDO | 50/sec     | Merge-heavy during sync |
| DealDO    | 20/sec     | Low-volume lifecycle    |

Alarm overhead: Each DO runs reconciliation every 5 minutes. At 10,000 DOs, that’s ~33 alarms/sec across the fleet. Negligible.


Summary

| Concept           | Implementation                              |
| ----------------- | ------------------------------------------- |
| DO classes        | AccountDO, AssetDO, ContactDO, DealDO       |
| Sharding          | Per-entity (account, asset, contact, deal)  |
| ID convention     | {namespace}_{entity_id}                     |
| Hot path          | In-memory → SQLite cached_state → compute   |
| Cross-DO          | Stub calls (sync), Queue (async), D1 (read) |
| Reconciliation    | Alarm every 5 minutes, < 100ms/state        |
| Replication       | Alarm-triggered, batched to Queue → D1      |
| Config resolution | Asset → Campaign (account) → Account        |

The DO architecture implements Principles 2 (Facts Are Immutable), 7 (Derived State Is Disposable), and 10 (Budget Is Eligibility). Per-entity sharding ensures writes scale linearly. Single-writer guarantees eliminate race conditions. Cached state enables sub-10ms RTB eligibility checks while reconciliation ensures eventual correctness.