What changed in v2
rs-x v2 uses a faster parser and an optimized expression engine. The headline numbers: parsing is up to 87% faster, live updates are 60–70% faster for typical expressions, and memory at scale is roughly halved. Binding costs more upfront than in v1, but this is largely a cost shift: v2 resolves all expression dependencies and builds the full watch graph once at bind time, so every subsequent evaluation is a direct function call with no AST traversal. v1 deferred that dependency resolution to each evaluation, keeping bind cheap but making every update more expensive.
The full v1 vs v2 comparison — with numbers for every metric — is on the v1 vs v2 comparison page.
| Area | v2 vs v1 | Notes |
|---|---|---|
| Parsing | Up to 87% faster | Single-identifier expressions benefit most |
| Binding (upfront) | ~10–30% slower | Full watch graph built once at bind; saves cost on every update |
| Updates (single field) | Up to 70% faster | Calls compiled function instead of walking AST |
| Updates (bulk) | Up to 60% faster | V8 JIT optimises the compiled function |
| Memory | ~50% less | Compiled plans are shared across all bindings |
Two engine modes: compiled and tree
rs-x can evaluate expressions in two modes. Tree mode walks the parsed AST on every update — straightforward, no upfront compilation cost, but evaluation time grows with expression complexity. Compiled mode uses the AOT compiler (rs-x-compiler) to generate a native JavaScript function for each expression at build time. At runtime, rs-x looks up the pre-generated function and calls it directly — no runtime compilation. V8 JIT-optimises these as regular JS function calls.
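The difference between the two modes can be sketched in a few lines. This is a hypothetical illustration, not rs-x's actual internals: the AST shape, `evalTree`, and `compile` are invented names, and the real AOT compiler emits code at build time rather than via `new Function` at runtime.

```js
// Hypothetical sketch: tree-walk evaluation vs a compiled function.
// AST shape and function names are invented, not rs-x internals.
const ast = {
  type: 'Binary', op: '*',
  left:  { type: 'Identifier', name: 'price' },
  right: { type: 'Identifier', name: 'quantity' },
};

// Tree mode: walk the AST on every evaluation.
function evalTree(node, model) {
  switch (node.type) {
    case 'Identifier': return model[node.name];
    case 'Binary': {
      const l = evalTree(node.left, model);
      const r = evalTree(node.right, model);
      return node.op === '*' ? l * r : undefined;
    }
  }
}

// Compiled mode: generate a plain function once; every update is then
// a direct call that V8 can JIT-optimise like any other JS function.
function compile(node) {
  const src = (function gen(n) {
    if (n.type === 'Identifier') return `m.${n.name}`;
    return `(${gen(n.left)} ${n.op} ${gen(n.right)})`;
  })(node);
  return new Function('m', `return ${src};`);
}

const model = { price: 3, quantity: 4 };
console.log(evalTree(ast, model)); // 12
console.log(compile(ast)(model));  // 12
```

Both paths produce the same value; the compiled path just pays its cost once instead of on every update.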
You select the mode per call site with the compiled option:
```js
rsx('price * quantity', { compiled: true })(model)  // compiled mode
rsx('price * quantity', { compiled: false })(model) // tree mode
```

The break-even depends on expression complexity. For simple single-identifier expressions, both modes are within a few percent of each other. For complex expressions with 15+ AST nodes, compiled mode is consistently faster for both binding and updates.
Compiled vs tree: the break-even point →
Parsing: once per unique expression
Every expression string goes through the parser exactly once. The parsed AST is stored in a cache keyed by the expression string. Every subsequent binding that uses the same string clones the cached AST — a clone is far cheaper than a full parse.
In a table with 10,000 rows and 20 unique column expressions, rs-x parses 20 times and clones 199,980 times. Parse cost is therefore a cold-start concern — first load, SSR, initial hydration — not a steady-state concern.
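The cache-and-clone pattern can be sketched as follows. The names here (`parse`, `getAst`) and the use of `structuredClone` are stand-ins for illustration, not rs-x's actual parser or clone routine:

```js
// Sketch of a parse-once cache keyed by the expression string.
// parse() stands in for the real (expensive) parser.
const astCache = new Map();

function parse(src) {
  // Expensive in reality; trivial here just to make the sketch run.
  return { type: 'Expr', src };
}

function getAst(src) {
  let ast = astCache.get(src);
  if (!ast) {
    ast = parse(src);          // cold path: once per unique string
    astCache.set(src, ast);
  }
  return structuredClone(ast); // warm path: cheap clone per binding
}

// Three bindings of the same expression -> 1 parse, 3 clones.
const a = getAst('price * quantity');
const b = getAst('price * quantity');
const c = getAst('price * quantity');
console.log(astCache.size);      // 1
console.log(a !== b && b !== c); // true: each binding gets its own copy
```

Because the cache key is the raw string, expressions that differ only in whitespace would miss the cache in this sketch; the real parser's keying behaviour may differ.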
The preparse option lets you pre-populate the cache at startup so the first binding is as fast as every subsequent one:
```js
rsx('price * quantity', { preparse: true }) // parses at import time
```

The lazy option defers loading the AOT-compiled plan module until the expression is first used. Only a lightweight manifest of expression strings is loaded at startup; the compiled plan itself is imported on demand. This is useful for large applications where many expressions are registered but only a subset are needed on any given page:
```js
rsx('price * quantity', { lazy: true }) // preparse deferred until first bind
```

Parse performance data and charts →
One watcher per model field
When an expression binds to a model, rs-x registers a watcher for each field the expression reads. If two expressions both read price on the same model object, they share one watcher — rs-x does not create a second one. The watcher is reference-counted and released when the last expression that uses it is disposed.
Watcher sharing helps when multiple expressions observe the same field on the same model instance — for example, two components both showing price from the same object create only one watcher between them. In a typical table where each row is its own model, sharing does not apply across rows: a 1,000-row × 10-column table creates 10,000 watchers (one per unique model–field pair). A field change notifies exactly the expressions that depend on that field — nothing more.
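The sharing behaviour amounts to reference-counting keyed by the model–field pair. A minimal sketch, with invented names and none of the actual notification machinery:

```js
// Sketch of reference-counted watcher sharing keyed by model + field.
// Invented shapes; not the actual rs-x watcher implementation.
const watchers = new Map(); // model object -> Map(field -> { refs })

function acquireWatcher(model, field) {
  let fields = watchers.get(model);
  if (!fields) watchers.set(model, (fields = new Map()));
  let w = fields.get(field);
  if (!w) fields.set(field, (w = { refs: 0 }));
  w.refs += 1; // another expression now depends on this field
  return w;
}

function releaseWatcher(model, field) {
  const fields = watchers.get(model);
  const w = fields.get(field);
  if (--w.refs === 0) fields.delete(field); // last user gone: tear down
}

const product = { price: 9.5 };
const w1 = acquireWatcher(product, 'price'); // first expression binds
const w2 = acquireWatcher(product, 'price'); // second expression binds
console.log(w1 === w2); // true: one shared watcher, refs === 2

releaseWatcher(product, 'price');
releaseWatcher(product, 'price');
console.log(watchers.get(product).size); // 0: watcher released
```

Note that keying on the model object is what makes sharing per-instance: two row models with the same field names still get separate watchers, which is exactly the 10,000-watcher table case above.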
Even without sharing, watcher cost scales predictably. The identifier-only benchmark shows bind and update performance at scale for the common table pattern.
Identifier-only binding performance →
Memory and disposal
rs-x uses a reference-counted binding graph. Every binding holds a reference to its expression and its watchers. When you call .dispose(), rs-x walks the graph in one pass and decrements all reference counts. Watchers whose count reaches zero are released. No manual teardown is needed beyond the single dispose call.
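The single-pass disposal can be sketched as below. These shapes are hypothetical; rs-x's real graph tracks more than watchers, but the refcount mechanics are the point:

```js
// Sketch of single-pass disposal over a reference-counted graph.
// Hypothetical shapes, not rs-x's actual binding graph.
function makeWatcher() { return { refs: 0, released: false }; }

function bind(watchersUsed) {
  for (const w of watchersUsed) w.refs += 1;
  return {
    watchers: watchersUsed,
    dispose() {
      // One pass: decrement every watcher; release those that hit zero.
      for (const w of this.watchers) {
        if (--w.refs === 0) w.released = true;
      }
    },
  };
}

const wPrice = makeWatcher();
const wQty = makeWatcher();
const b1 = bind([wPrice, wQty]); // e.g. 'price * quantity'
const b2 = bind([wPrice]);       // e.g. 'price'

b1.dispose();
console.log(wPrice.released, wQty.released); // false true: b2 still holds wPrice
b2.dispose();
console.log(wPrice.released); // true: last reference dropped
```

The key property is that a watcher survives as long as any live binding references it, and no explicit per-watcher teardown is needed.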
Memory usage scales predictably with binding count. In compiled mode, all bindings of the same expression share a single compiled plan — so the plan cost is paid once regardless of how many bindings exist. In tree mode, each binding holds its own copy of the expression tree.
For generated expressions where each binding has a unique expression string, compiled mode uses significantly less memory: at 1,000 same-model generated expressions, compiled mode uses 515 MB vs 1,500 MB in tree mode.
Memory usage and disposal benchmarks →
Detailed benchmarks