Core Concepts

Performance report

v2.0.0 rewrites the parser and the expression engine. Parsing is up to 87% faster. Live updates are 60–70% faster. Memory at scale is roughly halved. One tradeoff: calling rsx(expr)(model) many times in a tight loop costs more than it did in v1 — because v2 does more work upfront per binding. This page explains why, and shows when it matters.

Parsing: up to 87% faster

Every expression string goes through the parser once. The result is cached, and every subsequent binding that uses the same string just clones the cached AST. So parse speed is mainly a cold-start cost — first page load, server-side render, initial hydration.
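The cache behaviour can be pictured as a memoized parse function. This is an illustrative sketch, not rs-x internals — `parseOnce` here is a toy stand-in for the real parser:

```javascript
// Illustrative sketch of string-keyed parse caching (not rs-x
// internals; parseOnce is a stand-in for the real parser).
const astCache = new Map();

function parseOnce(source) {
  // Toy "parse": split an additive expression into identifier nodes.
  return { type: 'expr', nodes: source.split('+').map(s => ({ id: s.trim() })) };
}

function parseCached(source) {
  let ast = astCache.get(source);
  if (!ast) {
    ast = parseOnce(source);   // cold path: the only real parse work
    astCache.set(source, ast);
  }
  return ast;                  // warm path: shared cached result
}
```

Every binding that reuses the same expression string hits the warm path, which is why parse speed is primarily a cold-start cost.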

In v1.0.0, parsing a single identifier like count took 5.5 µs. In v2.0.0 it takes 0.7 µs. Small expressions gain the most because the old parser had a fixed overhead that dominated the tiny amount of actual work. Larger expressions still improve: a 63-node formula drops from 44 µs to 36 µs, 20% faster. Both engine modes share the same parser, so these numbers apply equally to compiled and tree.

Nodes | Expression shape | v2.0.0 total (ms, 5,000 ops) | v2.0.0 (µs/op) | Ops/sec
1 | v0 | 3.653 | 0.73 | 1,368,847
3 | v0 + v1 | 9.331 | 1.87 | 535,839
7 | v0 + v1 + v2 + v3 | 20.475 | 4.09 | 244,205
15 | v0 + ... + v7 | 42.432 | 8.49 | 117,836
31 | v0 + ... + v15 | 87.641 | 17.53 | 57,051
63 | v0 + ... + v31 | 178.090 | 35.62 | 28,076

After the first parse, every binding that uses the same expression string clones the cached result. In tree mode, clone cost grows with expression size — more nodes to copy. In compiled mode, clone cost is nearly flat regardless of size, because the compiled function is shared and only lightweight binding metadata gets duplicated.
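The two clone strategies can be sketched as follows. This is illustrative, not rs-x internals — the shapes of the node and plan objects are assumptions:

```javascript
// Sketch of the two per-binding clone strategies (not rs-x internals).
// Tree mode deep-copies every AST node, so clone cost is O(nodes).
function cloneTree(node) {
  return { ...node, children: (node.children ?? []).map(cloneTree) };
}

// Compiled mode shares the compiled evaluator function; each binding
// only gets a small metadata record, so clone cost is near-constant.
function cloneCompiledPlan(plan) {
  return { evaluate: plan.evaluate, deps: [...plan.deps] };
}
```

In `cloneTree`, a 63-node expression means 63 object copies per binding; in `cloneCompiledPlan`, one shared function reference and one small array copy regardless of expression size.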

Nodes | Parse+clone total (ms) | Parse+clone (µs/op) | Clone from cache total (ms) | Clone from cache (µs/op)
1 | 5.611 | 1.12 | 2.717 | 0.54
3 | 12.015 | 2.40 | 7.935 | 1.59
7 | 23.535 | 4.71 | 17.898 | 3.58
15 | 47.499 | 9.50 | 37.934 | 7.59
31 | 95.965 | 19.19 | 78.644 | 15.73
63 | 191.709 | 38.34 | 161.554 | 32.31

Binding: more work upfront, faster from that point on

Binding connects an expression to a model and starts watching for changes. It happens when you call rsx(expression)(model). This is where v2.0.0 shows its main tradeoff.

Take a data table with 1,000 rows, each bound to the same expression a + b on its own row model. v1.0.0 set that up in about 25 ms. v2.0.0 takes 43–46 ms. An extra 18 ms.

The reason: v1.0.0 deferred per-model setup work (watcher registration, evaluate-manager initialisation) until the first time each binding was read. Binding 1,000 rows was 1,000 cheap AST clones and little else. v2.0.0 performs that setup synchronously at bind time, so binding 1,000 rows means 1,000 full setups up front. In exchange, each binding arrives fully initialised and every subsequent update can start immediately, with no lazy-init checks.
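The structural difference can be sketched as two binding shapes. This is illustrative only — `setup` stands in for watcher registration and evaluate-manager initialisation, and the names are hypothetical:

```javascript
// Sketch of v1-style lazy setup vs v2-style eager setup (illustrative;
// `setup` stands in for watcher registration + evaluate-manager init).
function makeLazyBinding(setup) {       // v1-style: cheap bind
  let ready = false;
  return {
    read(model) {
      if (!ready) { setup(model); ready = true; } // pay on first read
      return model.value;
    },
  };
}

function makeEagerBinding(setup, model) { // v2-style: full setup at bind
  setup(model);                           // synchronous, up front
  return { read: m => m.value };          // no lazy-init branch per read
}
```

The lazy variant makes bind cheap but puts a readiness check in front of every read; the eager variant moves all of that cost to bind time.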

The payback is fast. Bulk update at 1,000 bindings drops from 7.9 ms in v1.0.0 to 2.4–2.9 ms — saving 5–5.5 ms on every update cycle. The 18 ms extra is recovered after 3–4 updates. Any live table that re-renders when data changes will cross that threshold within seconds.
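The break-even point follows directly from dividing the extra bind cost by the per-update saving, using the rounded figures above:

```javascript
// Break-even arithmetic for the 1,000-row bind-same tradeoff,
// using the rounded figures from the text above.
const extraBindMs = 43 - 25;        // v2 bind cost minus v1: ~18 ms
const savedPerUpdateMs = 7.9 - 2.4; // v1 bulk update minus v2: ~5.5 ms
const breakEvenUpdates = Math.ceil(extraBindMs / savedPerUpdateMs);
// 18 / 5.5 ≈ 3.3, so the extra bind cost is repaid by the 4th update
```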

At 10,000 rows, the picture changes further. Compiled mode bind same takes 441 ms vs v1.0.0's 638 ms — the shared compiled plan amortises per-binding setup cost at scale, and bind same is now 31% faster than v1.0.0. Tree mode at 10,000 is still slower to bind, but its bulk-update saving (73 ms vs 146 ms) recovers that cost within two update cycles.

Bindings | Bind unique (ms) | Bind same (ms)
1,000 | 38.350 | 43.373
3,000 | 143.833 | 142.731
5,000 | 260.666 | 298.487
10,000 | 737.067 | 884.867

These are tree mode numbers. Compiled mode bind same at 1,000 is slightly higher (46 ms); at 10,000 it reverses to 441 ms. Full compiled vs tree comparison is in the reference section below.

Updates: the 60–70% improvement that matters most

After binding, updates are where your application spends most of its time. A field changes, rs-x notifies only the expressions that read that field, and subscribers receive new values. Two scenarios to understand:

One field on one row. The cost is O(1) — it does not grow with how many total bindings are active. The true per-update cost is around 0.5 µs at every binding count. The numbers in the table below vary because performance.now() is unreliable for sub-millisecond single-shot measurements at this scale; the actual work is constant and near-instantaneous.

One field changes on every row (bulk update). This is the core metric for live data tables. At 1,000 bindings, v1.0.0 needed 7.9 ms; v2.0.0 needs 2.4–2.9 ms. At 10,000 bindings: 146 ms down to 61–73 ms. Two changes drive this: shared watchers (one watcher per model field instead of one per expression, so a single field change propagates through one path instead of fanning out to duplicates) and inline binary evaluation (recalculating a + b reads both operands directly with no intermediate array allocation per recalculation).
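The shared-watcher idea can be sketched as one subscriber set per model field. This is illustrative, not rs-x internals:

```javascript
// Sketch of shared per-field watchers (illustrative, not rs-x
// internals). One watcher per model field, shared by every binding.
const watchers = new Map(); // field name -> Set of subscriber callbacks

function watchField(field, subscriber) {
  let subs = watchers.get(field);
  if (!subs) { subs = new Set(); watchers.set(field, subs); } // one per field
  subs.add(subscriber);
}

function fieldChanged(field, value) {
  // A single field change walks one subscriber set instead of
  // fanning out through duplicate per-expression watchers.
  for (const sub of watchers.get(field) ?? []) sub(value);
}
```

However many bindings read a field, the change propagates through one watcher entry; the per-expression duplication that v1.0.0 paid for is gone.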

Bindings | Tree bulk update (ms) | Compiled bulk update (ms)
1,000 | 2.388 | 2.873
3,000 | 13.048 | 18.277
5,000 | 21.263 | 28.310
10,000 | 72.809 | 61.112

Cleanup: O(N) and predictable

When you dispose a set of bindings, rs-x walks the binding graph in one pass and releases every watcher and subscriber. The underlying storage uses a Map, so each removal is O(1) and the total cost for N bindings is O(N). No manual teardown is needed in application code.
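The one-pass walk over Map-backed storage can be sketched like this (illustrative; the shape of the binding records is an assumption):

```javascript
// Sketch of one-pass disposal over Map-backed binding storage.
// Each Map delete is O(1), so disposing N bindings is O(N) overall.
function disposeAll(bindings) {
  for (const [id, binding] of bindings) {
    binding.release();   // drop the watcher and subscriber references
    bindings.delete(id); // safe to delete the current entry mid-iteration
  }
}
```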

Per-binding dispose cost stays between 0.01 and 0.04 ms across all measured scales. Disposing 10,000 bindings takes around 300–400 ms in the worst case — the worst case being the allocation-heavy unique-expression scenario where every binding has its own independent watcher. In a real table where many rows watch the same field, the shared watcher is released once regardless of how many bindings used it, so dispose is cheaper still.

Bindings | Dispose median (ms) | ms / binding
1,000 | 13.77 | 0.0138
2,000 | 63.81 | 0.0319
3,000 | 103.58 | 0.0345
4,000 | 147.08 | 0.0368

Putting the numbers in perspective

The benchmarks deliberately go to extremes — thousands of bindings all triggered at once. Real applications look very different. A typical form or detail page has 20–100 bindings. A data table with 100 rows and 5 bound columns has 500 bindings. At those scales, bind cost is in single-digit milliseconds and update cost is sub-millisecond — both comfortably inside the 100 ms threshold where users start to notice lag.

The bind-same regression scales proportionally with row count. The extra ~18 ms is for 1,000 rows. For 100 rows it is around 2 ms. For a 50-row table it is under 1 ms. If your tables are paginated or virtualised, you are never binding thousands of rows simultaneously — the regression does not apply at all.

The one scenario where v1.0.0 has an edge: a large table that renders once and never receives data updates. There, v1.0.0's deferred setup was genuinely cheaper and v2.0.0 offers no advantage. Any table whose data can change crosses the payback threshold quickly and runs faster on every subsequent update.

rs-x tracks dependencies through plain model assignment:

model.price = 42;

No subscriptions to manage, no lifecycle hooks, no explicit invalidation. Async values — Observable, Promise, BehaviorSubject — resolve transparently into expression values. The numbers above measure the runtime cost of all of that working automatically.
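Tracking plain assignment is commonly built on a Proxy `set` trap. The sketch below shows the general technique; rs-x's actual mechanism may differ, and `reactive` is a hypothetical name:

```javascript
// Sketch of change tracking through plain assignment via a Proxy set
// trap (a common technique; rs-x's actual mechanism may differ).
function reactive(target, onChange) {
  return new Proxy(target, {
    set(obj, key, value) {
      obj[key] = value;
      onChange(key, value); // notify expressions that read this field
      return true;
    },
  });
}

// const model = reactive({ price: 0 }, recompute);
// model.price = 42; // plain assignment triggers recompute('price', 42)
```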

Machine and run conditions

Machine: Apple M4, 16.0 GB RAM, darwin/arm64, OS 24.6.0, Node v25.4.0.

Parse scenarios run 5,000 operations per sample and do not bind expressions to models.

Run flags: node --expose-gc --max-old-space-size=4096. --expose-gc lets the benchmark force a full GC between samples so heap measurements reflect only the scenario under test.
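A sample harness along these lines is what the GC-between-samples setup implies. This is a sketch, not the benchmark's actual code; `measureSample` is a hypothetical name:

```javascript
// Sketch of a sample harness that forces a full GC between samples.
// global.gc exists only when Node runs with --expose-gc; the guard
// keeps the harness usable without the flag.
function measureSample(run) {
  if (typeof global.gc === 'function') global.gc(); // clear prior garbage
  const heapBefore = process.memoryUsage().heapUsed;
  const start = performance.now();
  run();
  return {
    ms: performance.now() - start,
    heapDeltaBytes: process.memoryUsage().heapUsed - heapBefore,
  };
}
```

Forcing the GC first means the heap delta reflects allocations from the scenario under test rather than garbage left over from the previous sample.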

Compared snapshots: reports/rsx-core-concepts-performance/benchmark-2026-03-14.json and reports/rsx-core-concepts-performance/benchmark-2026-03-31.json.

v1.0.0 vs v2.0.0 — all comparison points

Parse

Metric | v1.0.0 | v2.0.0 | Change
Parse 1 node | 5.482 µs/op | 0.731 µs/op | 86.7% faster
Parse 3 nodes | 6.993 µs/op | 1.866 µs/op | 73.3% faster
Parse 7 nodes | 10.524 µs/op | 4.095 µs/op | 61.1% faster
Parse 15 nodes | 17.710 µs/op | 8.486 µs/op | 52.1% faster
Parse 31 nodes | 25.173 µs/op | 17.528 µs/op | 30.4% faster
Parse 63 nodes | 44.295 µs/op | 35.618 µs/op | 19.6% faster
Parse+clone 63 nodes | 80.986 µs/op | 38.342 µs/op | 52.7% faster

Bind & update

Metric | v1.0.0 | v2.0.0 (tree) | v2.0.0 (compiled) | Change (tree) | Change (compiled)
Bind unique 1,000 | 35.092 ms | 38.350 ms | 32.317 ms | 9.3% slower | 7.9% faster
Bind same 1,000 | 25.444 ms | 43.373 ms | 45.661 ms | 70.5% slower | 79.5% slower
Bind unique 10,000 | 521.444 ms | 737.067 ms | 561.750 ms | 41.4% slower | 7.7% slower
Bind same 10,000 | 638.054 ms | 884.867 ms | 440.759 ms | 38.7% slower | 30.9% faster
Single update 1,000 | 0.089 ms | 0.009 ms | 0.008 ms | 90.4% faster | 90.7% faster
Bulk update 1,000 | 7.904 ms | 2.388 ms | 2.873 ms | 69.8% faster | 63.7% faster
Single update 10,000 | 0.107 ms | 0.002 ms | 0.002 ms | 98.1% faster | 97.8% faster
Bulk update 10,000 | 146.234 ms | 72.809 ms | 61.112 ms | 50.2% faster | 58.2% faster

Parse performance — full chart

Nodes | Expression shape | v2.0.0 total (ms, 5,000 ops) | v2.0.0 (µs/op) | Ops/sec
1 | v0 | 3.653 | 0.73 | 1,368,847
3 | v0 + v1 | 9.331 | 1.87 | 535,839
7 | v0 + v1 + v2 + v3 | 20.475 | 4.09 | 244,205
15 | v0 + ... + v7 | 42.432 | 8.49 | 117,836
31 | v0 + ... + v15 | 87.641 | 17.53 | 57,051
63 | v0 + ... + v31 | 178.090 | 35.62 | 28,076

Parse cache behavior — full chart

Nodes | Parse+clone total (ms) | Parse+clone (µs/op) | Clone from cache total (ms) | Clone from cache (µs/op)
1 | 5.611 | 1.12 | 2.717 | 0.54
3 | 12.015 | 2.40 | 7.935 | 1.59
7 | 23.535 | 4.71 | 17.898 | 3.58
15 | 47.499 | 9.50 | 37.934 | 7.59
31 | 95.965 | 19.19 | 78.644 | 15.73
63 | 191.709 | 38.34 | 161.554 | 32.31

Same-model generated expressions (compiled vs tree)

This scenario binds 1,000 generated expressions to the same model object. Each expression is a long arithmetic chain — roughly 60–120+ nodes, repeatedly using x and y. Representative shape: (((x + y) + ((x + y) + n) - a) * b) / ((x + y) + c) repeated many times in one expression string. This is the scenario where compiled mode has its largest advantage.

Bindings | Metric | Compiled (ms) | Tree (ms) | Compiled vs tree
1,000 | Bind | 11.884 | 358.387 | 96.7% faster
1,000 | Dispose | 1.945 | 24.203 | 92.0% faster
1,000 | Single update | 5.560 | 49.734 | 88.8% faster
1,000 | Bulk update | 34.887 | 393.438 | 91.1% faster

Heap usage (compiled vs tree)

Scenario | Bindings | Metric | Compiled (MB) | Tree (MB)
Sync identifier | 1,000 | bind | 74.0 | 76.0
 | | single update | 51.0 | 47.1
 | | bulk update | 55.2 | 51.2
Async identifier | 1,000 | bind | 164.7 | 170.0
 | | single update | 149.3 | 150.4
 | | bulk update | 152.3 | 153.3
Same-model generated expressions | 1,000 | bind | 515.3 | 1499.7
 | | dispose | 515.3 | 1499.7
 | | single update | 512.3 | 1495.7
 | | bulk update | 533.1 | 1527.3

Peak RSS (compiled vs tree)

Scenario | Bindings | Metric | Compiled (MB) | Tree (MB)
Sync identifier | 1,000 | bind | 222.5 | 218.3
 | | single update | 222.7 | 218.2
 | | bulk update | 223.4 | 218.8
Async identifier | 1,000 | bind | 719.6 | 729.6
 | | single update | 719.6 | 729.7
 | | bulk update | 719.6 | 729.8
Same-model generated expressions | 1,000 | bind | 1103.7 | 1734.9
 | | dispose | 1103.7 | 1734.9
 | | single update | 1104.2 | 1741.1
 | | bulk update | 1104.6 | 1759.4

Identifier-only binding (most common real-world pattern)

The most common binding in a real app is a single identifier — row.status, user.name — bound to its own model object. Each expression is a single-node tree: no operators, no member chains. This is the simplest and cheapest case.

Bindings | Bind (ms) | Bind+initialize (ms) | Single update (ms) | Bulk update (ms) | µs / binding
100 | 2.256 | 371.406 | 0.008 | 26.464 | 22.56
500 | 70.174 | 95.432 | 0.008 | 4.658 | 140.35
1,000 | 66.185 | 111.798 | 0.002 | 3.010 | 66.19
3,000 | 161.463 | 234.292 | 0.003 | 37.485 | 53.82
5,000 | 275.182 | 309.953 | 0.002 | 24.942 | 55.04
10,000 | 758.932 | 902.250 | 0.002 | 55.082 | 75.89

Shared-identifier binding scenario

In practice, many expressions across many rows read the same small set of model fields. This scenario binds N expressions to one model while reusing only 10 identifiers across all bindings. It validates shared-watch behaviour: regardless of how many bindings are added, only 10 watcher subscriptions are ever created.
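The bounded-subscription behaviour is easy to picture with a Set keyed by identifier. This is a sketch of the invariant being validated, not rs-x code:

```javascript
// Sketch: subscription count is bounded by distinct identifiers,
// not by binding count, when watchers are shared.
const subscriptions = new Set();

function bindIdentifier(name) {
  subscriptions.add(name); // a repeat identifier reuses its subscription
}

// 10,000 bindings cycling over 10 identifiers -> 10 subscriptions
for (let i = 0; i < 10000; i++) bindIdentifier(`v${i % 10}`);
```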

At 10,000 bindings, total bind time in this shared scenario is roughly level (about 1.0×) with the unique-identifier stress case.

Expressions | Shared identifiers | watchState calls | New subscriptions | Total bind time (ms) | ms per expression
1,000 | 10 | 10 | 10 | 170.32 | 0.17032
3,000 | 10 | 10 | 10 | 222.28 | 0.07409
5,000 | 10 | 10 | 10 | 296.69 | 0.05934
10,000 | 10 | 10 | 10 | 725.84 | 0.07258

Memory usage

Scenario | Heap (MB) | Peak RSS (MB)
Parse (3 nodes) | 18.2 | 109.5
Parse (7 nodes) | 18.7 | 110.0
Parse (15 nodes) | 20.6 | 110.9
Parse (1 node) | 20.7 | 104.5
Parse (31 nodes) | 21.0 | 129.0
Parse (63 nodes) | 25.1 | 133.2