
EB SDK v2 — Proposed Architecture Changes

Status: Proposed

Decision Confidence Overview

| Area | Confidence | Status | Open Questions | Notes |
| --- | --- | --- | --- | --- |
| Package/class naming (EB vs ZKP) | 1/5 | Needs product decision | Is this SDK going to grow into the main SDK supporting both EB and Shielded Pool/L3? | Todo for Kacper: join the Shielded Pool and EB SDKs. |
| Factory pattern + DI model | 5/5 | Build now | Todo for Jakub. | Three interfaces, two-level split, platform factories. |
| Decomposing the wallet monolith | 4/5 | Build now | | Split ~4000 lines into feature modules; new file structure. |
| Wallet API design | 3/5 | Design, iterate | How much do we care about bundle size/tree-shaking? | My assumption is that 20 KB more in the SDK bundle is fine compared to 20 MB of lazy-loaded components. |
| Multi-chain design | 3/5 | Design, iterate | How do we envision multi-chain support requirements long-term? | One client per network. Multi-network = multiple clients with shared backends via DI. Covers migration (2 deployments, same chain). |
| Error design | 2/5 | Needs design spike | | Taxonomy is a starting point; needs work. |
| TreeState interface | 2/5 | Needs design spike | | Interface shape and concurrency model need validation. |
| Telemetry | 2/5 | Needs design spike | | Needs more design and thought to cover privacy concerns and capture the right events for metrics. |
| AbortSignal support | 2/5 | Needs design spike | | Useful when loading/execution takes 4+ seconds for users on old devices or bad networks. |
| CDN asset integrity | 2/5 | Needs design spike | | Important for production long-term; probably fine for the time being. |
| client.dispose() | 1/5 | Deprioritize | | For use cases where the SDK is used occasionally and resource cleanup matters (e.g. a merchant webapp with a privacy wallet tab). |

Table of Contents

  1. Build now — high confidence
  2. Needs product/leadership decision
  3. Design with current assumptions — medium confidence
  4. Needs design spike or product input — low confidence
  5. Deprioritize
     • [client.dispose() (1/5)](#clientdispose-15)

1. Build now — high confidence

Factory pattern + dependency injection (5/5)

Problem

Five global/singleton patterns exist today: registerNativeProver/Solver (module-level lets), globalProver, defaultSolver, setupMobileEB(), and createStorage()/createStakeStore() (which sniff typeof indexedDB). Alongside these, ~15 typeof checks are scattered through env.ts, prover.ts, dlog-solver.ts, l3/stake-store.ts, and stealth/store.ts for storage, circuit loading, WASM init, and solver threading.

The result: tests can't run in parallel, call ordering is fragile, and consumers can't substitute backends (e.g., MemoryStorage in a browser test) without patching source. Every other change in this document depends on globals being gone.

Change

All globals removed. Platform concerns expressed as three injected interfaces:

interface ISolverBackend {
  init(): Promise<{ ready: true }>;
  solve(xHex: string, yHex: string): Promise<SolveResult>;
  dispose?(): void;
}

interface IProverBackend {
  prove(request: ProveRequest): Promise<ProveResult>;
  init?(): Promise<void>;
  destroy?(): Promise<void>;
}

interface IStorageAdapter {
  get(key: string): Promise<any | null>;
  set(key: string, value: any): Promise<void>;
  delete(key: string): Promise<void>;
  clear(): Promise<void>;
}

Implementations: DlogSolver (WASM), NativeSolverAdapter (Rust FFI), WasmProverBackend (NoirJS + bb.js), MoProAdapter (native), MockSolverBackend/MockProverBackend (tests).

Two high-level services wrap backends with domain logic:

  • DecryptService wraps ISolverBackend — ElGamal decrypt + discrete log solve, lazy init.
  • ProofService wraps IProverBackend — circuit loading/caching, witness mapping, prewarm.

This two-level split (backend → service) keeps the client platform-agnostic. New platforms need only a backend impl.
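To make the substitution benefit concrete, here is a minimal in-memory IStorageAdapter of the kind a browser test could inject instead of IndexedDB. The interface matches the proposal above; the implementation is an illustrative sketch, not the SDK's actual code.

```typescript
// Hypothetical in-memory IStorageAdapter for tests. Interface per the proposal;
// implementation details are assumptions.
interface IStorageAdapter {
  get(key: string): Promise<any | null>;
  set(key: string, value: any): Promise<void>;
  delete(key: string): Promise<void>;
  clear(): Promise<void>;
}

class MemoryStorage implements IStorageAdapter {
  private store = new Map<string, any>();

  async get(key: string) { return this.store.has(key) ? this.store.get(key) : null; }
  async set(key: string, value: any) { this.store.set(key, value); }
  async delete(key: string) { this.store.delete(key); }
  async clear() { this.store.clear(); }
}
```

Because the client receives the adapter through config rather than sniffing `typeof indexedDB`, a test passes `storage: new MemoryStorage()` and never touches platform globals.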

Config shape

The config targets a single network. All deployment and feature config lives in NetworkConfig — no split across two levels. ClientConfig holds only infrastructure concerns (DI slots, telemetry, callbacks):

interface NetworkConfig {
  chainId: number;
  rpcUrl: string;
  contract: `0x${string}`;
  tokens: Record<string, `0x${string}`>;
  relayer: { url: string; timeout?: number }; // required — all ops default to relayer submission
  chain?: Chain; // viem Chain object (optional, inferred from chainId)
  swap?: { registry: `0x${string}` };
  l3?: {
    stakeCircuit: CircuitSource;
    unstakeCircuit: CircuitSource;
  };
  circuits?: { // override default circuit sources
    transfer?: CircuitSource;
    unshield?: CircuitSource;
  };
}

interface ClientConfig {
  network?: NetworkConfig; // defaults to NETWORKS.BASE

  // DI slots (factories provide defaults per platform)
  solver?: ISolverBackend;
  prover?: IProverBackend;
  storage?: IStorageAdapter;
  stakeStoreFactory?: (id: string) => IStakeStore;
  threads?: number;

  // Telemetry (design TBD — see the Telemetry section)
  telemetry?: boolean | { endpoint: string };
  onTelemetry?: (event: TelemetryEvent) => void;

  // Callbacks
  onProgress?: (...) => void;
  onLog?: (...) => void;
  onAssetLoading?: (...) => void;
}

NetworkConfig owns all deployment and feature config. Feature availability is determined entirely by its shape: swap is enabled when network.swap exists, l3 when network.l3 exists. Relayer is always required — every EB operation defaults to relayer submission (gasless), and stealth operations cannot function without one. The non-relayer codepath exists as a fallback for direct submission but is not the default.

ClientConfig holds only infrastructure: DI slots for platform backends, telemetry, and callbacks. No feature config lives here.
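A sketch of how feature availability can follow config shape. The stand-in module classes and constructor signatures below are assumptions for illustration; only the "non-null exactly when the sub-config exists" rule comes from the proposal.

```typescript
// Minimal illustration: a module is non-null exactly when its sub-config exists.
interface SwapConfig { registry: string }
interface L3Config { stakeCircuit: string; unstakeCircuit: string }
interface MinimalNetworkConfig { swap?: SwapConfig; l3?: L3Config }

// Stand-ins for the real SwapModuleImpl / L3ModuleImpl classes.
class SwapModuleImpl { constructor(public cfg: SwapConfig) {} }
class L3ModuleImpl { constructor(public cfg: L3Config) {} }

function buildFeatureModules(network: MinimalNetworkConfig) {
  return {
    swap: network.swap ? new SwapModuleImpl(network.swap) : null,
    l3: network.l3 ? new L3ModuleImpl(network.l3) : null,
  };
}
```

No feature flags, no registration calls: the NetworkConfig shape is the single source of truth for what the wallet exposes.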

Platform factories

Multi-step creation collapses to one async call per platform:

const client = await createWebClient({ network: NETWORKS.BASE });
const client = await createNodeClient({ network: NETWORKS.BASE });
const client = await createMobileClient(); // zero-config defaults to NETWORKS.BASE

Each factory selects backends (web: WASM + bb.js + IndexedDB; node: FileStorage; mobile: MoPro + native solver + MMKV), applies NETWORKS.BASE default, initializes services, creates network resources, and returns a ready client.

Assumption: most consumers operate on one network. Zero-config defaults to Base. Multi-network scenarios use multiple clients with shared backends (see multi-chain section).

Tradeoffs

Hard break for all consumers. One extra indirection hop (wallet → service → backend). ProofService owns circuit-specific logic, which may need to shift for backend-side optimization. Three factories to maintain. Direct client construction still works for custom setups — "two ways to create" needs clear documentation.


Decomposing the monolith (4/5)

Problem

client.ts is 3965 lines: EBClient (~1150), EBWallet (~2340), plus inline types, constants, ABIs, and utilities. Any feature change touches this file. EBWallet mixes all features in one class with shared private state — wallet.stake() alone is 240 lines interleaving pure computation with side effects, untestable without a full integration setup.

Change

The file splits and the class splits.

File decomposition. client.ts → ~15 files across client/, services/, features/, plus types.ts, constants.ts, errors.ts, interfaces.ts. Each 300–600 lines. Enables per-feature git blame, tree-shaking, and parallel PRs.

Feature modules. Each feature becomes a self-contained class (L3ModuleImpl, SwapModuleImpl, StealthModuleImpl, HistoryModuleImpl), changing the public API to namespaced access: wallet.stake() → wallet.l3.stake(), wallet.getHistory() → wallet.history.get(). Complex operations use a prepare/submit split:

class L3ModuleImpl implements L3Module {
  prepareStake(balance, amount, treeSnapshot, arPK): PreparedStake { ... }     // pure
  private async submitStake(prepared, options): Promise<`0x${string}`> { ... } // side effects
  async stake(amount, options?): Promise<Stake> {
    const prepared = this.prepareStake(...);
    const txHash = await this.submitStake(prepared, options);
    return this.recordStake(prepared, txHash);
  }
}

EBWalletBase shrinks to ~600 lines. prepareStake() is pure — unit-testable with zero mocks.
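To make the testability claim concrete, here is a hedged sketch of the pattern. The real prepareStake takes richer types (tree snapshots, audit keys); the simplified body and types below are illustrative only.

```typescript
// Illustrative pure prepare step — real inputs/outputs are richer than this.
interface TreeSnapshot { root: bigint; nextIndex: number }
interface PreparedStake { amount: bigint; insertIndex: number; root: bigint }

function prepareStake(balance: bigint, amount: bigint, tree: TreeSnapshot): PreparedStake {
  if (amount > balance) throw new Error('insufficient_balance');
  // pure computation only: no RPC, no storage, no proving
  return { amount, insertIndex: tree.nextIndex, root: tree.root };
}

// Unit test with zero mocks — no client, no chain, no WASM:
const prepared = prepareStake(1_000n, 100n, { root: 42n, nextIndex: 7 });
```

Because prepare takes plain values and returns plain values, edge cases (over-balance, stale snapshot) become one-line assertions instead of integration scenarios.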

Open question: prepare/submit visibility

| Option | Implication |
| --- | --- |
| A. Public | Consumers can pre-generate proofs, cache prepared TXs, build custom flows. More API surface to maintain. |
| B. Internal only | Simpler public API. Consumers call wallet.l3.stake() and it does everything. |

Tradeoffs

More files to navigate (mitigated by barrel re-exports). Circular dependency risk. Import path churn. Cross-feature debugging spans files.


2. Needs product/leadership decision

Package/class naming (1/5)

Review note: Low confidence but high importance. This is a branding/strategy decision that depends on Shielded Pool plans. Wrong call means renaming everything again. Needs product/leadership input, not an engineering decision.

The SDK currently uses an EB-specific naming convention: EBClient, EBClientConfig, EBError, etc. If the SDK is intended to cover both Encrypted Balances and Shielded Pool under one package, two separate naming questions arise.

A. Package & class names

Renaming classes and the package to protocol-neutral names:

| Current (EB-only) | Future (multi-protocol) |
| --- | --- |
| EBClient | ZkpClient |
| EBClientConfig | ZkpConfig |
| EBWallet / EBWalletBase | ZkpWallet |
| EBError | ZkpError |
| @zkprivacy/eb-sdk | @zkprivacy/sdk |

B. Wallet API — wallet.eb.* namespace

Separately, EB core methods (transfer, shield, unshield, getBalance) currently live directly on wallet. If Shielded Pool arrives, both protocol layers expose similar verbs (transfer, deposit, withdraw) and need disambiguation:

wallet.eb.transfer(...)      // Encrypted Balance
wallet.eb.shield(...)
wallet.eb.unshield(...)
wallet.eb.getBalance(...)

wallet.l3.stake(...)         // L3 (already namespaced)
wallet.swap.createOffer(...)
wallet.history.get(...)

wallet.sp.deposit(...)       // Shielded Pool (future)
wallet.sp.withdraw(...)

This is a breaking API change if namespaced. Three options:

  1. Namespace now in v2 — Every feature is a namespace, no exceptions.
  2. Defer to v3 when Shielded Pool lands — v2 ships with wallet.transfer() as-is. A v3 break moves EB methods under wallet.eb.* alongside new wallet.sp.*.
  3. Keep EB top-level, never namespace it — wallet.transfer() stays as-is permanently. L3, SP, swap, history are namespaced; EB is not. This positions EB as the primary/default protocol and everything else as advanced modes. Downside: inconsistent — "why is EB special?" — but reflects the actual usage hierarchy, where most consumers only use EB.

A and B are independent decisions, and the architecture supports either: the proposed design makes both the rename and the namespace addition mechanical changes, not structural ones.

Product questions

  • Is Shielded Pool planned to live in this SDK? If yes, when — is it close enough to justify renaming now?
  • Is a second breaking rename acceptable, or should we get it right the first time?

3. Design with current assumptions — medium confidence

API — consistent namespacing (3/5)

Features (l3, swap, stealth, history) are exposed as namespaced, nullable properties on the wallet:

wallet.l3      // L3Module | null
wallet.swap    // SwapModule | null
wallet.stealth // StealthModule | null
wallet.history // HistoryModule (always present)

A module is null when NetworkConfig lacks the required sub-config (e.g., swap is null unless network.swap is configured; l3 is null unless network.l3 provides circuit sources).

Consumer pattern is idiomatic — one ?. or one if check:

await wallet.l3?.stake(100n);

if (wallet.swap) {
const { offerId } = await wallet.swap.createOffer({...});
}

Conditional types (WalletFor<C>) were evaluated. With single-network config they could narrow module types at compile time (e.g., config has network.swap → wallet.swap is non-null), but nullable is simpler and forward-compatible. The one ?. or if check is negligible consumer cost.
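For reference, the evaluated conditional-type approach would look roughly like this. WalletFor comes from the proposal; NetworkShape, makeWallet, and the module stand-ins are hypothetical names for illustration.

```typescript
// Sketch of compile-time narrowing from config shape. Not proposed API.
interface SwapModule { createOffer(): string }
interface NetworkShape { swap?: { registry: string } }

// The module's type is derived from the config's shape.
type WalletFor<N extends NetworkShape> = {
  swap: N extends { swap: object } ? SwapModule : null;
};

function makeWallet<N extends NetworkShape>(network: N): WalletFor<N> {
  const swap = network.swap ? { createOffer: () => 'offer' } : null;
  return { swap } as WalletFor<N>;
}

// With a literal config the compiler knows swap is present; no `?.` needed:
const withSwap = makeWallet({ swap: { registry: '0x0' } });
withSwap.swap.createOffer();

// Without swap config, wallet.swap is typed null:
const without = makeWallet({});
```

The cost that led to rejecting this is visible above: the generic plumbing and the cast inside makeWallet spread through every factory signature, for a gain of one removed `?.` at call sites.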

Open question: bundle size and tree-shaking

Even with separate files, all feature modules are instantiated inside EBWallet and exposed as properties. The bundler can't tree-shake unused modules. The SDK is small enough today that this doesn't matter.

Two alternative approaches for when it does matter:

A. Nullable properties (current proposal) — modules instantiated inside wallet. Simple API, no tree-shaking.

const wallet = client.createWallet({ spendingKey, walletClient });
await wallet.swap?.createOffer({...});

B. Standalone service constructors — consumer imports and instantiates only what they need. Real tree-shaking.

import { createWebClient } from '@zkprivacy/eb-sdk/web';
import { SwapModule } from '@zkprivacy/eb-sdk/swap';

const client = await createWebClient({ network: NETWORKS.BASE });
const wallet = client.createWallet({ spendingKey, walletClient });
const swap = new SwapModule(wallet.context, { registry: '0xSwapRegistry...' });

The module constructor pattern (new SwapModule(sharedInfra, networkResources, config)) is chosen so that the architecture supports either approach. The question is whether to ship option B in v2 or defer.

Product question: How much do we care about bundle size? What are the deployment contexts and size budgets?


Multi-chain design (3/5)

Review note: Primary near-term value is supporting 2 deployments on the same chain for contract migrations. One client per network is the default model. Multi-network scenarios use multiple clients with shared backends via DI — no internal multi-network routing needed.

Problem

v1 binds one client to one chain: new EBClient({ chainId: 8453 }). N networks means N clients, each loading its own prover and solver — potentially N× the WASM initialization for chain-agnostic math.

Design: one client, one network

Each client is configured for a single network (deployment target). The factory config includes the network directly:

const client = await createWebClient({ network: NETWORKS.BASE });

createWallet no longer takes a network key — the client already knows its network:

const wallet = client.createWallet({ spendingKey, walletClient });

The client validates that walletClient.chain.id matches config.network.chainId and throws CONFIG_INVALID on mismatch.

Resource architecture

The client owns two tiers of resources. Heavyweight, chain-agnostic services are shareable across clients via DI. Network-specific resources are scoped to the client:

interface SharedInfra {
  readonly decrypt: DecryptService;
  readonly proof: ProofService;
  readonly config: Readonly<ClientConfig>;
}

interface NetworkResources {
  readonly chain: ChainReader;
  readonly relayer?: RelayerClient;
  readonly tree?: ITreeService;
  readonly stakeStores: Map<string, IStakeStore>; // keyed by (contract, address)
}

SharedInfra is created at client construction. NetworkResources is created for the configured network — a viem PublicClient, optional tree/relayer instances. Prover and solver initialize once per client (or once total when shared via DI).

Wallet as lightweight session

EBKeyring already holds spendingKey + BPK with pure crypto methods. The wallet is this identity bound to the client's network resources — a session, not a long-lived object:

const wallet = client.createWallet({ spendingKey, walletClient });

The consumer passes walletClient (a viem WalletClient) — same as v1. This preserves the consumer's transport config, middleware, and signer (browser extension, hardware wallet, etc.). Internally, createWallet takes the wallet's identity, attaches the client's network resources, constructs feature modules, and returns a connected wallet.

Client lifecycle

The client is the long-lived object that owns all persistent resources. Wallets are ephemeral — when GC'd, only their in-memory caches (balance, history) are lost:

Client (app lifetime) — bound to one network
├── SharedInfra: DecryptService, ProofService
└── NetworkResources
    ├── ChainReader
    ├── ITreeService
    ├── RelayerClient
    └── stakeStores: Map<(contract, address) → IStakeStore>

Wallet (session lifetime) — lightweight, borrows from client
├── identity: spendingKey, BPK
├── walletClient (from consumer)
├── ref → NetworkResources
├── ref → SharedInfra
├── balance cache (ephemeral)
└── feature modules: L3Module, SwapModule, ...

In production, one client per app is the expected pattern for single-network use cases.

Multi-network: multiple clients, shared backends

When an app needs multiple networks, it creates multiple clients. The DI model lets heavy backends (solver, prover) be shared — WASM initializes once:

const solver = new DlogSolver();
const prover = new WasmProverBackend({ threads: 4 });
await solver.init();
await prover.init();

const baseClient = await createWebClient({ network: NETWORKS.BASE, solver, prover });
const arbClient = await createWebClient({ network: arbConfig, solver, prover });

const baseWallet = baseClient.createWallet({ spendingKey, walletClient });
const arbWallet = arbClient.createWallet({ spendingKey, walletClient: arbWalletClient });

baseWallet.BPK === arbWallet.BPK; // true — same identity, different networks

Prover and solver are chain-agnostic (they operate on math, not chain state), so sharing is safe. Network-specific resources (chain reader, tree service, relayer) are per-client and cannot be shared.

SDK-shipped network defaults

The SDK ships a NETWORKS registry with known deployments:

export const NETWORKS = {
  BASE: {
    chainId: 8453,
    rpcUrl: 'https://mainnet.base.org',
    contract: '0xDea0882f6026e7c4458fbdD67296D89FF849279b',
    tokens: {
      USD: '0x9f6d30758b85bd2f4b6107550756162e04ce1650',
      EUR: '0x06f9706c8defcebd9cafe7c49444fc768e89d7a7',
      PLN: '0xbdcdbe9a1ee3ce45b6eea8ec4d7cb07cd8444720',
    },
    swap: { registry: '0x667d6c4d1e69399a8b881b474100dccf73ce42a0' },
    relayer: { url: 'https://eb-relayer.zkprivacy.dev' },
  },
  ANVIL: {
    chainId: 31337,
    rpcUrl: 'http://127.0.0.1:8545',
    contract: '0x5FbDB2315678afecb367f032d93F642f64180aa3',
    tokens: { USD: '0x5FbDB2315678afecb367f032d93F642f64180aa3' },
    relayer: { url: 'http://127.0.0.1:3001' },
  },
} as const;

This replaces DEPLOYMENTS + CURRENCY_TOKENS + RPC_URLS scattered across client.ts and @zkprivacy/eb-contracts. One canonical source, exported for consumer use.

When no network is passed, factories default to NETWORKS.BASE:

const client = await createMobileClient();
const wallet = client.createWallet({ spendingKey, walletClient });
// → Base mainnet, native prover/solver

Contract migration: two clients, same chain

A "network" is a deployment target — a contract address + circuit version on a chain. Two deployments on the same chain = two clients. This naturally handles custom networks, testnets, and contract migration:

const solver = new DlogSolver();
const prover = new WasmProverBackend();
await solver.init();
await prover.init();

const currentClient = await createWebClient({
  network: {
    ...NETWORKS.BASE,
    l3: { stakeCircuit: '/eb_stake.json', unstakeCircuit: '/eb_unstake.json' },
  },
  solver, prover,
});

const legacyClient = await createWebClient({
  network: {
    ...NETWORKS.BASE, // same chainId, same RPC
    contract: '0xOldContract...',
    circuits: { // older circuit version
      transfer: '/v1/eb_transfer.json',
      unshield: '/v1/eb_unshield.json',
    },
  },
  solver, prover, // share backends — no duplicate WASM init
});

// Migration flow — same identity, same chain, different contracts
const wallet = currentClient.createWallet({ spendingKey, walletClient });
const legacy = legacyClient.createWallet({ spendingKey, walletClient });

const balance = await legacy.getBalance();
await legacy.unshield(balance, myAddress); // exit old contract
await wallet.shield({ amount: balance }); // enter new contract

No SDK PR required to add networks — the NetworkConfig interface (defined in config shape section above) is the contract. Consumers spread a preset and override what they need.

ProofService loads and caches by circuit type — when backends are shared across clients, multiple circuit versions coexist in cache. The IProverBackend is circuit-agnostic (it takes bytes + witness), so version differences are a NetworkConfig + ProofService concern, not a prover concern.
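One way that coexistence could work is a cache keyed by circuit source, so that identical sources dedupe while different versions load separately. This is an assumption about ProofService internals, not its actual design; CircuitCache is a hypothetical name.

```typescript
// Cache compiled circuits by source identifier so two clients sharing one
// prover can hold two circuit versions side by side. Illustrative sketch.
class CircuitCache {
  private entries = new Map<string, Promise<Uint8Array>>();

  constructor(private load: (source: string) => Promise<Uint8Array>) {}

  get(source: string): Promise<Uint8Array> {
    let entry = this.entries.get(source);
    if (!entry) {
      // '/eb_transfer.json' and '/v1/eb_transfer.json' cache independently
      entry = this.load(source);
      this.entries.set(source, entry);
    }
    return entry;
  }
}
```

Caching the promise rather than the result also deduplicates concurrent first loads of the same circuit.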

Tradeoffs

Simpler mental model: one client = one network. No internal routing, no per-network resource maps, no network key abstraction. Multi-network requires consumer-managed client instances — more boilerplate than a single multi-network client, but explicit and predictable. Backend sharing via DI avoids duplicate WASM initialization. The walletClient.chain.id validation is a runtime check, not compile-time.


4. Needs design spike or product input — low confidence

Review note: All sections below are starting points, not shippable designs. They can ship with internal/unstable APIs in v2 and stabilize in a v2.x minor after iteration informed by real usage.

Error design (2/5)

Review note: Taxonomy is a starting point. Detail map stability, whether hasCode() opt-in typing adds real value vs complexity, and the exact error codes all need iteration — ideally informed by the actual ~110 throw sites and real consumer error-handling patterns.

Problem

All ~110 throw new Error('message') sites use string matching for error handling. No structure, no types, no cross-platform safety (RN bridge loses prototype chains).

Proposed direction

Two-level Stripe model: broad type for UI error boundaries, specific code for programmatic handling.

Taxonomy. A single const defines the type→code relationship. Both unions are derived from it — one source of truth, no drift:

const EB_ERROR_TAXONOMY = {
  account_error: ['insufficient_balance', 'not_registered', 'decrypt_failed'],
  tx_error: ['tx_reverted', 'tx_timeout'],
  proof_error: ['proof_failed', 'solver_init_failed', 'prover_init_failed'],
  relayer_error: ['relayer_rejected', 'relayer_timeout'],
  config_error: ['config_invalid', 'feature_not_enabled'],
} as const;

type EBErrorType = keyof typeof EB_ERROR_TAXONOMY;
type EBErrorCode = typeof EB_ERROR_TAXONOMY[EBErrorType][number];

Error class. Single class, no hierarchy. _tag brand field enables shape-based guard for RN bridge. cause follows ES2022 ErrorOptions:

class EBError extends Error {
  readonly _tag = 'EBError' as const;

  constructor(
    public readonly type: EBErrorType,
    public readonly code: EBErrorCode,
    message: string,
    public readonly details: Record<string, unknown> = {},
    options?: { cause?: unknown },
  ) {
    super(message, options);
  }

  toJSON() {
    return {
      _tag: this._tag, type: this.type, code: this.code,
      message: this.message, details: this.details,
      cause: this.cause instanceof Error
        ? { name: this.cause.name, message: this.cause.message }
        : this.cause,
    };
  }

  static fromJSON(json: ReturnType<EBError['toJSON']>): EBError {
    return new EBError(json.type, json.code, json.message, json.details);
  }

  static create(code: EBErrorCode, details?: Record<string, unknown>, options?: { cause?: unknown }): EBError {
    const type = Object.entries(EB_ERROR_TAXONOMY)
      .find(([, codes]) => (codes as readonly string[]).includes(code))![0] as EBErrorType;
    return new EBError(type, code, defaultMessage(code, details), details ?? {}, options);
  }
}

EBError.create(code, details) is the primary throw-site API — type is derived, message from a per-code template:

throw EBError.create('insufficient_balance', { have: 100n, need: 500n });
// → type: 'account_error', message: 'Insufficient balance: have 100, need 500'

throw EBError.create('proof_failed', { circuit: 'eb_transfer' }, { cause: noirError });

Shape-based guard — works across RN bridge where prototype chains are lost:

function isEBError(e: unknown): e is EBError {
  return typeof e === 'object' && e !== null && (e as any)._tag === 'EBError';
}

Consumer code — two levels:

// Level 1: broad error boundary (5 types)
if (isEBError(e)) {
  switch (e.type) {
    case 'account_error': showBalanceError(e.message); break;
    case 'proof_error': showRetryPrompt(); break;
  }
}

// Level 2: specific handling (check code)
if (isEBError(e) && hasCode(e, 'insufficient_balance')) {
  setFieldError('amount', `Need ${e.details.need}, have ${e.details.have}`);
}

Opt-in detail typing via hasCode() guard:

interface EBErrorDetailMap {
  insufficient_balance: { have: bigint; need: bigint };
  not_registered: { address: string };
  tx_reverted: { txHash: string };
  tree_desync: { localIndex: number; chainIndex: number };
  proof_failed: { circuit: string };
  // ...
}

function hasCode<C extends keyof EBErrorDetailMap>(
  e: EBError, code: C,
): e is EBError & { code: C; details: EBErrorDetailMap[C] } {
  return e.code === code;
}

Upstream error wrapping policy:

  1. Wrap at service boundaries. ProofService, DecryptService, ChainReader, RelayerClient catch upstream errors (viem, NoirJS, bb.js) and rethrow as EBError with { cause }.
  2. Never wrap twice. If an EBError is caught, rethrow as-is.
  3. Preserve originals. Upstream error accessible via e.cause.
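The three rules collapse into one small helper at each boundary. The sketch below is self-contained, so it re-declares a simplified EBError mirroring the class above; wrapError is an assumed helper name, not part of the proposal's API.

```typescript
// Simplified mirror of EBError/isEBError for a self-contained sketch.
class EBError extends Error {
  readonly _tag = 'EBError' as const;
  constructor(public readonly code: string, message: string, options?: { cause?: unknown }) {
    super(message);
    // assign cause manually so this compiles on lib targets without ErrorOptions
    if (options && 'cause' in options) (this as any).cause = options.cause;
  }
}

const isEBError = (e: unknown): e is EBError =>
  typeof e === 'object' && e !== null && (e as any)._tag === 'EBError';

function wrapError(e: unknown, code: string, message: string): EBError {
  if (isEBError(e)) return e;                       // rule 2: never wrap twice
  return new EBError(code, message, { cause: e });  // rules 1 & 3: wrap once, keep original
}
```

A service boundary then becomes `try { … } catch (e) { throw wrapError(e, 'proof_failed', '…'); }`, and nested boundaries are safe because the inner EBError passes through untouched.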

Open questions

| Question | Options |
| --- | --- |
| Detail map stability | A) Detail shapes are informational only (not semver-stable). B) Promote specific detail shapes to stable API. |
| hasCode() value | Does opt-in detail typing add real value vs complexity? Should consumers just use e.details with their own assertions? |
| Exact taxonomy | Current 5 types / ~12 codes are a starting point. Need to audit the actual ~110 throw sites to validate. |

Tradeoffs

~110 sites to update. hasCode() detail narrowing is opt-in complexity. The detail map is a convenience, not a contract — but consumers may treat it as one.


TreeState interface (2/5)

Review note: Highest-risk refactor in the proposal. The interface shape (6 methods, per-contract scoping, snapshot vs mutation split) needs validation against actual tree usage patterns. Concurrency model (async mutex per contract) is proposed but not proven.

Problem

In v1, EBWallet.stake() reaches into EBClient via ~15 @internal methods — an undocumented private API. Tree state is shared across wallets but mutated with no concurrency control.

Proposed direction

v2 has two tiers of shared state:

**SharedInfra** — created once per client, used by all wallets:

| Resource | Mutated by |
| --- | --- |
| DecryptService | lazy init only |
| ProofService | lazy init only |

**NetworkResources** — created once per client (one network), shared by all wallets on that client:

| Resource | Mutated by |
| --- | --- |
| ChainReader | immutable |
| RelayerClient | immutable |
| ITreeService | stake(), syncTree() — concurrency controlled internally |
| stakeStores | stake(), unstake() |

Each network gets its own ITreeService instance. The interface is scoped to a single network — snapshot(contract) operates on that network's tree state:

interface ITreeService {
  snapshot(contract: `0x${string}`): TreeSnapshot;
  applyInsert(contract: `0x${string}`, commitment: bigint): InsertResult;
  sync(contract: `0x${string}`): Promise<void>;
  ensureSynced(contract: `0x${string}`): Promise<boolean>;
  checkIntegrity(contract: `0x${string}`): Promise<{ synced: boolean; localIndex: number; chainIndex: number }>;
  getState(contract: `0x${string}`): Promise<L3State>;
}

| Category | Operations | Rule |
| --- | --- | --- |
| Reads | snapshot(), checkIntegrity(), getState() | Free concurrent access. snapshot() returns an immutable copy. |
| Mutations | applyInsert(), sync() | Serialized per contract via async mutex. |
| Composite | ensureSynced() | Acquires the mutation lock internally. |

The SDK guarantees local concurrency safety. Cross-process/cross-user conflicts are handled on-chain (stale roots rejected; consumer catches TREE_DESYNC and retries).
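The proposed (but, per the review note, not yet proven) per-contract async mutex could be as small as the sketch below. This is one possible implementation, not the SDK's actual concurrency code.

```typescript
// Minimal per-key async mutex: mutations queue per contract address,
// while different contracts proceed independently. Illustrative sketch.
class KeyedMutex {
  private tails = new Map<string, Promise<void>>();

  async runExclusive<T>(key: string, fn: () => Promise<T>): Promise<T> {
    const prev = this.tails.get(key) ?? Promise.resolve();
    let release!: () => void;
    const next = new Promise<void>(resolve => (release = resolve));
    // queue behind earlier mutations for this contract; other keys are unaffected
    this.tails.set(key, prev.then(() => next));
    await prev;
    try {
      return await fn();
    } finally {
      release(); // unblock the next queued mutation
    }
  }
}
```

Under this model, applyInsert() and sync() would run inside runExclusive(contract, …), while reads bypass the mutex entirely because snapshot() hands out immutable copies.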

Per-wallet state (ephemeral, GC'd with wallet):

| State | Owner |
| --- | --- |
| Balance cache | EBWalletBase |
| History cache | HistoryModuleImpl |
| StealthManager | StealthModuleImpl |

Tradeoffs

SharedInfra/NetworkResources prop-drilled through multiple levels. ITreeService (~6 methods) is the highest-risk refactor — interface shape needs validation against real tree usage patterns before locking in.


Telemetry (2/5)

Review note: Needs more work before final design. Open questions: default on vs opt-in, granularity model, privacy/legal review, whether the ingestion service exists yet. Captures intent, not a shippable design.

Proposed direction

The SDK collects structured performance events and sends them to zkprivacy's telemetry service for monitoring SDK health, optimizing prover/solver performance, and diagnosing issues.

interface TelemetryEvent {
  type: 'solver_init' | 'solver_solve' | 'prover_prove' | 'prover_prewarm'
      | 'tree_sync' | 'relayer_submit' | 'tx_submit' | 'tx_confirm' | ...;
  durationMs: number;
  metadata: Record<string, unknown>;
}
```

Each event has typed metadata (backend name, circuit, proof size, solver tier, thread count, network key). getDeviceInfo() collects platform, cores, memory, COI support, and SDK version at init time.

Default behavior (proposed, not decided):

  • SDK sends events to https://telemetry.zkprivacy.dev (or configured endpoint) via navigator.sendBeacon / fetch with fire-and-forget semantics. No retries, no queueing — if the request fails, the event is lost. Zero impact on SDK operation.
  • Events are batched (e.g., flush every 10s or on idle).

Opt-out:

const client = await createWebClient({
  telemetry: false, // disables all telemetry — nothing sent, nothing collected
});

Consumer callback:

const client = await createWebClient({
  onTelemetry: (event) => {
    myAnalytics.track(event); // consumer receives the same events the SDK sends internally
  },
});

Privacy constraints. Events never contain wallet addresses, BPKs, spending keys, TX hashes, balances, or amounts. Only operational metrics: backend type, circuit name, duration, thread count, device class. Timestamps are coarsened to 1-second resolution.

Open questions

| Question | Options |
| --- | --- |
| Default on vs opt-in | On-by-default gives better coverage, but some enterprise/privacy-conscious consumers will object. Opt-in reduces coverage but avoids controversy. |
| Granularity | A) Per service call. B) Per user op. C) Both, with parentOp correlation. |
| Legal review | Even with no PII, on-by-default data collection may require GDPR disclosure. Has legal reviewed? |
| Ingestion service | Does the telemetry endpoint exist? What's the availability requirement? |

Tradeoffs

~20 instrumentation sites. Telemetry endpoint adds an external dependency (must be highly available or fail silently). Event schema is an implicit API — changes need backward compatibility with the ingestion service.


AbortSignal support (2/5)

Review note: Useful but adds complexity across ~15-20 methods. WASM can't truly cancel. Needs more thought on which operations actually benefit.

Proposed direction

Long-running ops (proof gen 2–10s, solver init 1–5s, relayer polling up to 60s) are uncancellable in v1. All would accept optional AbortSignal:

await wallet.transfer({ to, amount }, { signal });
await wallet.l3.stake(100n, { signal });

WASM prove() isn't interruptible — result discarded on abort. Relayer polling cancels immediately (TX may still confirm on-chain). Chain reads cancel via viem's signal support.
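The relayer-polling case can be sketched as a generic abortable poll loop. pollUntil is an illustrative helper, not the SDK's actual API; the interval/timeout defaults mirror the figures above.

```typescript
// Abortable polling: cancel between iterations and during the inter-poll sleep.
// The TX may still confirm on-chain after abort, per the semantics above.
async function pollUntil<T>(
  check: () => Promise<T | null>,
  opts: { signal?: AbortSignal; intervalMs?: number; timeoutMs?: number } = {},
): Promise<T> {
  const { signal, intervalMs = 1_000, timeoutMs = 60_000 } = opts;
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    signal?.throwIfAborted(); // cancel before each poll iteration
    const result = await check();
    if (result !== null) return result;
    // sleep, but wake immediately on abort instead of waiting out the interval
    await new Promise<void>((resolve, reject) => {
      const timer = setTimeout(resolve, intervalMs);
      signal?.addEventListener(
        'abort',
        () => { clearTimeout(timer); reject(signal.reason); },
        { once: true },
      );
    });
  }
  throw new Error('relayer_timeout');
}
```

For the in-flight request itself, the same signal would be forwarded to the transport (viem accepts one), so only the WASM prove() path is left with discard-on-abort semantics.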

Tradeoffs

~15–20 methods need signal threading. WASM cancellation has latency (up to ~5s). Post-TX-submission abort semantics are ambiguous.


CDN asset integrity (2/5)

Review note: Important for production long-term. Not urgent for v2 launch but should be on the roadmap. Operational cost (manifest regen per CDN update) needs a plan.

Problem

v1 fetches WASM solver binaries, tier tables, and circuit JSON from cdn.zkprivacy.dev via plain fetch with no integrity checks. A CDN compromise or MITM could serve malicious WASM that extracts spending keys or produces proofs leaking private inputs.

Proposed direction

v2 factories verify all downloaded assets against known SHA-256 hashes before use. The npm package ships a version-locked manifest of expected digests; createWebClient() checks every fetched asset and throws ASSET_INTEGRITY_FAILED on mismatch. JS glue code (dlog-solver.js, < 25 KB) moves into the npm package so the entire CDN surface is non-executable data — WASM instantiated only after verification, tier tables and circuit JSON parsed only after verification.

async function fetchVerified(url: string, expected: { sha256: string; sizeBytes: number }): Promise<ArrayBuffer> {
  const res = await fetch(url);
  if (!res.ok) // fetch resolves on HTTP errors — treat them as integrity failures too
    throw EBError.create('asset_integrity_failed', { url });
  const buf = await res.arrayBuffer();
  if (buf.byteLength !== expected.sizeBytes)
    throw EBError.create('asset_integrity_failed', { url });
  const hex = [...new Uint8Array(await crypto.subtle.digest('SHA-256', buf))]
    .map(b => b.toString(16).padStart(2, '0')).join('');
  if (hex !== expected.sha256)
    throw EBError.create('asset_integrity_failed', { url });
  return buf;
}

Consumers with self-hosted CDNs provide their own manifest via config.assetManifest.
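The shipped manifest could be as simple as a map from asset name to expected digest. This is a hypothetical shape: the field names follow fetchVerified's `expected` parameter, and every file name, hash, and size below is a placeholder that would be generated at package build time.

```typescript
// Placeholder entries — real values are produced by the release pipeline.
export const ASSET_MANIFEST: Record<string, { sha256: string; sizeBytes: number }> = {
  'eb_transfer.json': {
    sha256: 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855',
    sizeBytes: 1_843_200,
  },
  'dlog_solver.wasm': {
    sha256: '2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824',
    sizeBytes: 4_194_304,
  },
};
```

Version-locking this object in the npm package is what ties a given SDK release to exactly one set of CDN bytes.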

Tradeoffs

SHA-256 on large tier tables (~50 MB) adds ~100 ms — negligible vs. download time. Manifest must be regenerated on every CDN asset update. iOS frameworks rely on CocoaPods checksums and code signing rather than this mechanism.


5. Deprioritize

client.dispose() (1/5)

Review note: Optional, advanced-use-only. Not a core lifecycle concern. Freeing WASM/solver resources is a nice-to-have, not something most consumers will call. Wallet object cleanup is the consumer's responsibility. The architecture shouldn't be designed around it.

The proposal originally positioned client.dispose() as the central lifecycle hook. At 1/5 confidence, this changes:

  • Wallets are already ephemeral (GC'd, no cleanup needed) — this stays.
  • Client doesn't need a formal teardown. If a consumer wants to release WASM memory in a long-running app, dispose() can exist as an optional escape hatch, but the architecture shouldn't be designed around it.
  • In practice, the client lives for the app's lifetime and the OS reclaims everything on exit.

No need to track disposable references, no isDisposed guards on every method, no cleanup ordering concerns.