Core Concepts

Performance

rs-x performance depends on a few factors:

  • Expression complexity: this affects parsing and evaluation times, although both can be mitigated by preparsing and compiling expressions at build time, which is done by default. Most of the time expressions are just identifiers, so the complexity is minimal. Expressions are also cached, so if you parse expressions at runtime you pay the cost only once per unique expression string. For a table with 10,000 rows and 20 unique column expressions, rs-x only parses 20 expressions.
  • Number of unique (model, field) pairs you bind to: for every unique (model, field) pair rs-x creates a watcher. Watchers are shared between expressions, but you can still end up with many of them if you are not careful: a table with 1,000 rows and 10 columns needs 10,000 watchers. rs-x can still handle a large number of bindings, but it affects initial loading time. It does not affect performance once the initial load is done. See Demo
  • Update frequency: as the number of changes increases, the cost of updates typically grows. In RS-X, this is less of a concern. Expression evaluation is efficient, and only the expressions that depend on the changed data are re-evaluated and emit change events. As a result, updates remain localized, even when changes occur frequently.
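The update-locality idea in the last bullet can be sketched with a field-to-dependents index. This is a hypothetical model, not the rs-x internals: a change to a field re-runs only the evaluators registered for that field.

```javascript
// Hypothetical sketch: an index from field name to the expressions that
// read it, so a change re-evaluates only the dependent expressions.
const dependents = new Map(); // field -> Set of evaluator functions

function bind(fields, evaluate) {
  for (const f of fields) {
    if (!dependents.has(f)) dependents.set(f, new Set());
    dependents.get(f).add(evaluate);
  }
}

let evaluations = 0;
bind(['price', 'quantity'], () => evaluations++); // total = price * quantity
bind(['name'],              () => evaluations++); // label = name

function notify(field) {
  // Only expressions that depend on `field` run; everything else is untouched.
  for (const run of dependents.get(field) ?? []) run();
}

notify('price');
console.log(evaluations); // 1 — only the total expression re-ran
```

However frequent the changes, the cost per change is proportional to the number of dependent expressions, not the total number of bindings.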

All benchmarks on this page and its sub-pages were measured on Apple M4, 16.0 GB RAM, Node.js v25.4.0.

What changed in v2

RS-X v2 uses a faster parser and the expression engine has been optimized. The headline numbers: parsing is up to 87% faster, live updates are 60–70% faster for typical expressions, and memory at scale is roughly halved. Binding cost appears higher than v1, but this is largely a cost shift: v2 resolves all expression dependencies and builds the full watch graph once at bind time, so every subsequent evaluation is a direct function call — no AST traversal. v1 deferred that dependency resolution to each evaluation, keeping bind cheap but making every update more expensive.

The full v1 vs v2 comparison — with numbers for every metric — is on the v1 vs v2 comparison page.

Area                     v2 vs v1           Notes
Parsing                  Up to 87% faster   Single-identifier expressions benefit most
Binding (upfront)        ~10–30% slower     Full watch graph built once at bind; saves cost on every update
Updates (single field)   Up to 70% faster   Calls compiled function instead of walking AST
Updates (bulk)           Up to 60% faster   V8 JIT optimises the compiled function
Memory                   ~50% less          Compiled plans are shared across all bindings

Two engine modes: compiled and tree

rs-x can evaluate expressions in two modes. Tree mode walks the parsed AST on every update — straightforward, no upfront compilation cost, but evaluation time grows with expression complexity. Compiled mode uses the AOT compiler (rs-x-compiler) to generate a native JavaScript function for each expression at build time. At runtime, rs-x looks up the pre-generated function and calls it directly — no runtime compilation. V8 JIT-optimises these as regular JS function calls.

You select the mode per call site with the compiled option:

rsx('price * quantity', { compiled: true })(model)   // compiled mode
rsx('price * quantity', { compiled: false })(model)  // tree mode

The break-even depends on expression complexity. For simple single-identifier expressions both modes are within a few percent of each other. For complex expressions with 15+ AST nodes, compiled mode is consistently faster for both binding and updates.
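The difference between the two modes can be sketched for `price * quantity`. The AST shape and function names below are illustrative, not the rs-x internals:

```javascript
// Tree mode: walk a small AST on every evaluation.
const ast = {
  op: '*',
  left:  { id: 'price' },
  right: { id: 'quantity' },
};

function treeEval(node, model) {
  if (node.id) return model[node.id];       // identifier lookup
  const l = treeEval(node.left, model);
  const r = treeEval(node.right, model);
  return node.op === '*' ? l * r : undefined;
}

// Compiled mode: a plain function generated ahead of time. At runtime it is
// a direct call, with no traversal, that V8 can JIT-optimise.
const compiled = (model) => model.price * model.quantity;

const model = { price: 3, quantity: 4 };
treeEval(ast, model); // 12 — visits three nodes
compiled(model);      // 12 — one call
```

For a three-node expression the traversal overhead is small; it grows with every node the tree walker has to visit, which is why compiled mode pulls ahead on complex expressions.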

Compiled vs tree: the break-even point →

Parsing: once per unique expression

Every expression string goes through the parser exactly once. The parsed AST is stored in a cache keyed by the expression string. Every subsequent binding that uses the same string clones the cached AST — a clone is far cheaper than a full parse.

In a table with 10,000 rows and 20 unique column expressions, rs-x parses 20 times and clones 199,980 times. Parse cost is therefore a cold-start concern — first load, SSR, initial hydration — not a steady-state concern.
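The parse-once behaviour can be sketched as a string-keyed cache with clone-on-hit. This is a minimal model of the idea, assuming a stand-in `parse` function; it is not the rs-x parser:

```javascript
// Hypothetical sketch: parse once per unique expression string,
// clone the cached AST for every subsequent binding.
const cache = new Map();
let parses = 0;

function parse(expr) {
  parses++; // stand-in for the real (expensive) parser
  return { kind: 'identifier', name: expr };
}

function getAst(expr) {
  if (!cache.has(expr)) cache.set(expr, parse(expr));
  return structuredClone(cache.get(expr)); // clone is far cheaper than a parse
}

// 10,000 rows, 2 unique column expressions:
for (let row = 0; row < 10_000; row++) {
  getAst('price');
  getAst('quantity');
}
console.log(parses); // 2 — one parse per unique expression string
```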

The preparse option lets you pre-populate the cache at startup so the first binding is as fast as every subsequent one:

rsx('price * quantity', { preparse: true })  // parses at import time

The lazy option defers loading the AOT-compiled plan module until the expression is first used. Only a lightweight manifest of expression strings is loaded at startup; the compiled plan itself is imported on demand. This is useful for large applications where many expressions are registered but only a subset are needed on any given page:

rsx('price * quantity', { lazy: true })  // preparse deferred until first bind
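The lazy-loading behaviour amounts to a manifest of thunks that are resolved and memoized on first use. A minimal sketch, with illustrative names (`register`, `getPlan`) that are not part of the rs-x API:

```javascript
// Hypothetical sketch: only a manifest of expression strings is registered
// at startup; the compiled plan is loaded on first use and memoized.
const manifest = new Map(); // expression string -> loader thunk
const loaded = new Map();   // expression string -> compiled plan

function register(expr, loadPlan) {
  manifest.set(expr, loadPlan); // cheap: nothing is loaded yet
}

function getPlan(expr) {
  if (!loaded.has(expr)) {
    loaded.set(expr, manifest.get(expr)()); // pay the cost on first bind
  }
  return loaded.get(expr);
}

let loads = 0;
register('price * quantity', () => {
  loads++; // stands in for a dynamic import of the plan module
  return (m) => m.price * m.quantity;
});

console.log(loads); // 0 — nothing loaded at startup
const plan = getPlan('price * quantity');
console.log(plan({ price: 3, quantity: 4 })); // 12
console.log(loads); // 1 — loaded once, then memoized
```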

Parse performance data and charts →

One watcher per model field

When an expression binds to a model, rs-x registers a watcher for each field the expression reads. If two expressions both read price on the same model object, they share one watcher — rs-x does not create a second one. The watcher is reference-counted and released when the last expression that uses it is disposed.

Watcher sharing helps when multiple expressions observe the same field on the same model instance — for example, two components both showing price from the same object create only one watcher between them. In a typical table where each row is its own model, sharing does not apply across rows: a 1,000-row × 10-column table creates 10,000 watchers (one per unique model–field pair). A field change notifies exactly the expressions that depend on that field — nothing more.
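Reference-counted sharing keyed by (model, field) can be sketched as follows. The `acquire`/`release` names are illustrative, not the rs-x internals:

```javascript
// Hypothetical sketch: one watcher per unique (model, field) pair,
// reference-counted so it is created once and freed with its last user.
const watchers = new Map(); // model -> Map(field -> { count })

function acquire(model, field) {
  let fields = watchers.get(model);
  if (!fields) watchers.set(model, (fields = new Map()));
  let w = fields.get(field);
  if (!w) fields.set(field, (w = { count: 0 })); // first subscriber creates it
  w.count++;
  return w;
}

function release(model, field) {
  const fields = watchers.get(model);
  const w = fields.get(field);
  if (--w.count === 0) fields.delete(field); // last subscriber frees it
}

const model = {};
acquire(model, 'price'); // expression A reads price
acquire(model, 'price'); // expression B reads the same field
console.log(watchers.get(model).size); // 1 — one shared watcher

release(model, 'price');
console.log(watchers.get(model).size); // 1 — still held by A
release(model, 'price');
console.log(watchers.get(model).size); // 0 — released with the last user
```

Two different model objects never share a watcher even for the same field name, which is why per-row models in a table scale linearly with row count.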

Even without sharing, watcher cost scales predictably. The identifier-only benchmark shows bind and update performance at scale for the common table pattern.

Identifier-only binding performance →

Memory and disposal

rs-x uses a reference-counted binding graph. Every binding holds a reference to its expression and its watchers. When you call .dispose(), rs-x walks the graph in one pass and decrements all reference counts. Watchers whose count reaches zero are released. No manual teardown is needed beyond the single dispose call.
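The single-pass disposal can be sketched on a toy ref-counted graph. This is a minimal model under assumed names (`makeBinding`, `refs`, `released`), not the rs-x data structures:

```javascript
// Hypothetical sketch: a binding holds refs on its watchers; dispose()
// decrements them in one pass and releases any that reach zero.
function makeBinding(watchers) {
  for (const w of watchers) w.refs++; // binding takes a reference
  return {
    dispose() {
      for (const w of watchers) {
        if (--w.refs === 0) w.released = true; // zero refs -> release
      }
    },
  };
}

const price = { refs: 0, released: false };
const qty   = { refs: 0, released: false };

const a = makeBinding([price, qty]);
const b = makeBinding([price]);

a.dispose();
console.log(price.released, qty.released); // false true — price still held by b
b.dispose();
console.log(price.released); // true — last reference gone
```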

Memory usage scales predictably with binding count. In compiled mode, all bindings of the same expression share a single compiled plan — so the plan cost is paid once regardless of how many bindings exist. In tree mode, each binding holds its own copy of the expression tree.

For generated expressions where each binding has a unique expression string, compiled mode uses significantly less memory: at 1,000 same-model generated expressions, compiled mode uses 515 MB vs 1,500 MB in tree mode.

Memory usage and disposal benchmarks →