v2.0.0 rewrites the parser and the expression engine. Parsing is up to 87% faster. Live updates are 60–70% faster. Memory at scale is roughly halved. One tradeoff: calling rsx(expr)(model) many times in a tight loop costs more than it did in v1 — because v2 does more work upfront per binding. This page explains why, and shows when it matters.
Parsing: up to 87% faster
Every expression string goes through the parser once. The result is cached, and every subsequent binding that uses the same string just clones the cached AST. So parse speed is mainly a cold-start cost — first page load, server-side render, initial hydration.
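The cache-then-clone flow can be pictured with a small sketch. This is illustrative only — `parseExpression`, `parseCached`, and the toy node-counting logic are stand-ins, not rs-x internals:

```javascript
const astCache = new Map();

// Toy stand-in for the real parser: counts "+"-separated terms and
// returns a fake AST descriptor.
function parseExpression(source) {
  const terms = source.split("+").map((s) => s.trim());
  return { source, nodes: terms.length * 2 - 1 };
}

function parseCached(source) {
  let ast = astCache.get(source);
  if (!ast) {
    ast = parseExpression(source); // cold path: full parse, paid once per string
    astCache.set(source, ast);
  }
  // Each binding receives its own copy so it can annotate nodes freely.
  return structuredClone(ast);
}
```

Because the cache is keyed by the raw string, a table with 1,000 rows all bound to `a + b` pays the parse cost once and the clone cost 1,000 times.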
In v1.0.0, parsing a single identifier like count took 5.5 µs. In v2.0.0 it takes 0.7 µs. Small expressions gain the most because the old parser had a fixed overhead that dominated the tiny amount of actual work. Larger expressions still improve: a 63-node formula drops from 44 µs to 36 µs, 20% faster. Both engine modes share the same parser, so these numbers apply equally to compiled and tree.
| Nodes | Expression shape | Sample (ms, 5,000 parses) | µs / parse | Parses / sec |
| --- | --- | --- | --- | --- |
| 1 | v0 | 3.653 | 0.73 | 1,368,847 |
| 3 | v0 + v1 | 9.331 | 1.87 | 535,839 |
| 7 | v0 + v1 + v2 + v3 | 20.475 | 4.09 | 244,205 |
| 15 | v0 + ... + v7 | 42.432 | 8.49 | 117,836 |
| 31 | v0 + ... + v15 | 87.641 | 17.53 | 57,051 |
| 63 | v0 + ... + v31 | 178.090 | 35.62 | 28,076 |
After the first parse, every binding that uses the same expression string clones the cached result. In tree mode, clone cost grows with expression size — more nodes to copy. In compiled mode, clone cost is nearly flat regardless of size, because the compiled function is shared and only lightweight binding metadata gets duplicated.
| Nodes | Tree clone sample (ms, 5,000 clones) | Tree µs / clone | Compiled clone sample (ms, 5,000 clones) | Compiled µs / clone |
| --- | --- | --- | --- | --- |
| 1 | 5.611 | 1.12 | 2.717 | 0.54 |
| 3 | 12.015 | 2.40 | 7.935 | 1.59 |
| 7 | 23.535 | 4.71 | 17.898 | 3.58 |
| 15 | 47.499 | 9.50 | 37.934 | 7.59 |
| 31 | 95.965 | 19.19 | 78.644 | 15.73 |
| 63 | 191.709 | 38.34 | 161.554 | 32.31 |
Binding: more work upfront, faster from that point on
Binding connects an expression to a model and starts watching for changes. It happens when you call rsx(expression)(model). This is where v2.0.0 shows its main tradeoff.
Take a data table with 1,000 rows, each bound to the same expression a + b on its own row model. v1.0.0 set that up in about 25 ms. v2.0.0 takes 43–46 ms. An extra 18 ms.
The reason: v1.0.0 deferred per-model setup work (watcher registration, evaluate-manager initialisation) until the first time each binding was read. Binding 1,000 rows was 1,000 cheap AST clones and little else. v2.0.0 performs that setup synchronously at bind time, so 1,000 rows means 1,000 full setups up front. In exchange, each binding arrives fully initialised and every subsequent update can start immediately, without lazy-init checks.
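The two strategies can be contrasted in a sketch. Both classes are hypothetical; a single `setup` callback stands in for watcher registration and evaluate-manager initialisation:

```javascript
// v1-style: defer setup until the first read.
class LazyBinding {
  constructor(setup) { this.setup = setup; this.ready = false; }
  read(model) {
    if (!this.ready) { this.setup(); this.ready = true; } // lazy-init check on every read
    return model.value;
  }
}

// v2-style: pay the full setup at bind time.
class EagerBinding {
  constructor(setup) { setup(); }
  read(model) { return model.value; } // no per-read branch afterwards
}
```

Binding 1,000 `LazyBinding`s is nearly free but smears the setup cost across first reads; 1,000 `EagerBinding`s cost more up front and nothing extra later.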
The payback is fast. Bulk update at 1,000 bindings drops from 7.9 ms in v1.0.0 to 2.4–2.9 ms — saving 5–5.5 ms on every update cycle. The 18 ms extra is recovered after 3–4 updates. Any live table that re-renders when data changes will cross that threshold within seconds.
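The break-even arithmetic can be checked directly with the 1,000-row numbers quoted above:

```javascript
// One-time extra bind cost in v2 vs v1 at 1,000 rows.
const extraBindMs = 43 - 25;        // ≈ 18 ms

// Per-cycle saving on bulk updates (v1 minus the faster v2 figure).
const savedPerUpdateMs = 7.9 - 2.4; // ≈ 5.5 ms

// Number of update cycles until v2 has paid back its bind-time cost.
const updatesToBreakEven = Math.ceil(extraBindMs / savedPerUpdateMs); // 4
```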
At 10,000 rows, the picture changes further. Compiled mode bind same takes 441 ms vs v1.0.0's 638 ms — the shared compiled plan amortises per-binding setup cost at scale, and bind same is now 31% faster than v1.0.0. Tree mode at 10,000 is still slower to bind, but its bulk-update saving (73 ms vs 146 ms) recovers that cost within two update cycles.
| Rows | Bind (ms) | Bind same (ms) |
| --- | --- | --- |
| 1,000 | 38.350 | 43.373 |
| 3,000 | 143.833 | 142.731 |
| 5,000 | 260.666 | 298.487 |
| 10,000 | 737.067 | 884.867 |
These are tree mode numbers. Compiled mode bind same at 1,000 is slightly higher (46 ms); at 10,000 it reverses to 441 ms. Full compiled vs tree comparison is in the reference section below.
Updates: the 60–70% improvement that matters most
After binding, updates are where your application spends most of its time. A field changes, rs-x notifies only the expressions that read that field, and subscribers receive new values. Two scenarios to understand:
One field on one row. The cost is O(1) — it does not grow with how many total bindings are active. The true per-update cost is around 0.5 µs at every binding count. The numbers in the table below vary because performance.now() is unreliable for sub-millisecond single-shot measurements at this scale; the actual work is constant and near-instantaneous.
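One reliable way to measure an operation that cheap is to time a large batch and divide. This is a generic sketch of that technique, not the benchmark harness's actual code:

```javascript
// Time `iterations` calls of fn and return the average cost in µs per call.
// Batching keeps the measurement well above performance.now()'s practical
// resolution, which a single sub-microsecond call is not.
function microBenchUs(fn, iterations = 100_000) {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) fn();
  const totalMs = performance.now() - start;
  return (totalMs / iterations) * 1000;
}
```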
One field changes on every row (bulk update). This is the core metric for live data tables. At 1,000 bindings, v1.0.0 needed 7.9 ms; v2.0.0 needs 2.4–2.9 ms. At 10,000 bindings: 146 ms down to 61–73 ms. Two changes drive this: shared watchers (one watcher per model field instead of one per expression, so a single field change propagates through one path instead of fanning out to duplicates) and inline binary evaluation (recalculating a + b reads both operands directly with no intermediate array allocation per recalculation).
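The inline-evaluation change can be pictured as a before/after sketch. Both functions are hypothetical; rs-x's real evaluator is more general than a two-operand `+`:

```javascript
// Before: collect operands into an array, then combine them.
// The array is a fresh allocation on every recalculation.
function evalWithOperandArray(node, model) {
  const operands = [model[node.left], model[node.right]];
  return operands[0] + operands[1];
}

// After: read both operands directly. No intermediate allocation,
// so bulk updates produce no per-recalculation garbage.
function evalInline(node, model) {
  return model[node.left] + model[node.right];
}
```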
| Bindings | Tree bulk update (ms) | Compiled bulk update (ms) |
| --- | --- | --- |
| 1,000 | 2.388 | 2.873 |
| 3,000 | 13.048 | 18.277 |
| 5,000 | 21.263 | 28.310 |
| 10,000 | 72.809 | 61.112 |
Cleanup: O(N) and predictable
When you dispose a set of bindings, rs-x walks the binding graph in one pass and releases every watcher and subscriber. The underlying storage uses a Map, so each removal is O(1) and the total cost for N bindings is O(N). No manual teardown is needed in application code.
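A minimal sketch of Map-backed disposal, assuming a `BindingSet` that holds one teardown function per binding — illustrative only, not part of the rs-x API:

```javascript
class BindingSet {
  constructor() {
    this.teardowns = new Map(); // id -> teardown; Map delete/clear are O(1) per entry
    this.nextId = 0;
  }
  add(teardown) {
    const id = this.nextId++;
    this.teardowns.set(id, teardown);
    return id;
  }
  disposeAll() {
    // One pass over N bindings: O(N) total, O(1) per release.
    for (const teardown of this.teardowns.values()) teardown();
    this.teardowns.clear();
  }
}
```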
Per-binding dispose cost stays in the 0.01–0.04 ms range across all measured scales. Disposing 10,000 bindings takes around 300–400 ms in the worst case — the worst case being the allocation-heavy unique-expression scenario where every binding has its own independent watcher. In a real table where many rows watch the same field, the shared watcher is released once regardless of how many bindings used it, so dispose is cheaper still.
| Bindings | Dispose median (ms) | ms / binding |
| --- | --- | --- |
| 1,000 | 13.77 | 0.0138 |
| 2,000 | 63.81 | 0.0319 |
| 3,000 | 103.58 | 0.0345 |
| 4,000 | 147.08 | 0.0368 |
Putting the numbers in perspective
The benchmarks deliberately go to extremes — thousands of bindings all triggered at once. Real applications look very different. A typical form or detail page has 20–100 bindings. A data table with 100 rows and 5 bound columns has 500 bindings. At those scales, bind cost is in single-digit milliseconds and update cost is sub-millisecond — both comfortably inside the 100 ms threshold where users start to notice lag.
The bind-same regression scales proportionally with row count. The extra ~18 ms is for 1,000 rows. For 100 rows it is around 2 ms. For a 50-row table it is under 1 ms. If your tables are paginated or virtualised, you are never binding thousands of rows simultaneously — the regression does not apply at all.
The one scenario where v1.0.0 has an edge: a large table that renders once and never receives data updates. There, v1's deferred setup was genuinely cheaper, and v2.0.0 offers no advantage. Any table whose data can change crosses the payback threshold quickly and runs faster on every subsequent update.
rs-x tracks dependencies through plain model assignment:
model.price = 42;
No subscriptions to manage, no lifecycle hooks, no explicit invalidation. Async values — Observable, Promise, BehaviorSubject — resolve transparently into expression values. The numbers above measure the runtime cost of all of that working automatically.
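One way to picture how a plain assignment can be observed at all is a Proxy `set` trap. This is a sketch under that assumption — not necessarily rs-x's actual mechanism:

```javascript
// Wrap a plain object so every field write is reported to `onSet`.
// A reactivity system would use that hook to notify the field's watchers.
function trackedModel(target, onSet) {
  return new Proxy(target, {
    set(obj, key, value) {
      obj[key] = value;   // the write still lands on the plain object
      onSet(key, value);  // ...and is observed without any explicit API call
      return true;
    },
  });
}
```

From the application's point of view, `model.price = 42;` stays an ordinary assignment; the interception is invisible.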
Machine and run conditions
Machine: Apple M4, 16.0 GB RAM, darwin/arm64, OS 24.6.0, Node v25.4.0.
Parse scenarios run 5,000 operations per sample and do not bind expressions to models.
Run flags: node --expose-gc --max-old-space-size=4096. --expose-gc lets the benchmark force a full GC between samples so heap measurements reflect only the scenario under test.
Same-model generated expressions (compiled vs tree)
This scenario binds 1,000 generated expressions to the same model object. Each expression is a long arithmetic chain — roughly 60–120+ nodes, repeatedly using x and y. Representative shape: (((x + y) + ((x + y) + n) - a) * b) / ((x + y) + c) repeated many times in one expression string. This is the scenario where compiled mode has its largest advantage.
| Bindings | Metric | Compiled (ms) | Tree (ms) | Compiled vs tree |
| --- | --- | --- | --- | --- |
| 1,000 | Bind | 11.884 | 358.387 | 96.7% faster |
| 1,000 | Dispose | 1.945 | 24.203 | 92.0% faster |
| 1,000 | Single update | 5.560 | 49.734 | 88.8% faster |
| 1,000 | Bulk update | 34.887 | 393.438 | 91.1% faster |
Heap usage (compiled vs tree)
| Scenario | Bindings | Metric | Compiled (MB) | Tree (MB) |
| --- | --- | --- | --- | --- |
| Sync identifier | 1,000 | bind | 74.0 | 76.0 |
| Sync identifier | 1,000 | single update | 51.0 | 47.1 |
| Sync identifier | 1,000 | bulk update | 55.2 | 51.2 |
| Async identifier | 1,000 | bind | 164.7 | 170.0 |
| Async identifier | 1,000 | single update | 149.3 | 150.4 |
| Async identifier | 1,000 | bulk update | 152.3 | 153.3 |
| Same-model generated expressions | 1,000 | bind | 515.3 | 1499.7 |
| Same-model generated expressions | 1,000 | dispose | 515.3 | 1499.7 |
| Same-model generated expressions | 1,000 | single update | 512.3 | 1495.7 |
| Same-model generated expressions | 1,000 | bulk update | 533.1 | 1527.3 |
Peak RSS (compiled vs tree)
| Scenario | Bindings | Metric | Compiled (MB) | Tree (MB) |
| --- | --- | --- | --- | --- |
| Sync identifier | 1,000 | bind | 222.5 | 218.3 |
| Sync identifier | 1,000 | single update | 222.7 | 218.2 |
| Sync identifier | 1,000 | bulk update | 223.4 | 218.8 |
| Async identifier | 1,000 | bind | 719.6 | 729.6 |
| Async identifier | 1,000 | single update | 719.6 | 729.7 |
| Async identifier | 1,000 | bulk update | 719.6 | 729.8 |
| Same-model generated expressions | 1,000 | bind | 1103.7 | 1734.9 |
| Same-model generated expressions | 1,000 | dispose | 1103.7 | 1734.9 |
| Same-model generated expressions | 1,000 | single update | 1104.2 | 1741.1 |
| Same-model generated expressions | 1,000 | bulk update | 1104.6 | 1759.4 |
Identifier-only binding (most common real-world pattern)
The most common binding in a real app is a single identifier — row.status, user.name — bound to its own model object. Each expression is a single-node tree: no operators, no member chains. This is the simplest and cheapest case.
| Bindings | Bind (ms) | Bind+initialize (ms) | Single update (ms) | Bulk update (ms) | µs / binding |
| --- | --- | --- | --- | --- | --- |
| 100 | 2.256 | 371.406 | 0.008 | 26.464 | 22.56 |
| 500 | 70.174 | 95.432 | 0.008 | 4.658 | 140.35 |
| 1,000 | 66.185 | 111.798 | 0.002 | 3.010 | 66.19 |
| 3,000 | 161.463 | 234.292 | 0.003 | 37.485 | 53.82 |
| 5,000 | 275.182 | 309.953 | 0.002 | 24.942 | 55.04 |
| 10,000 | 758.932 | 902.250 | 0.002 | 55.082 | 75.89 |
Shared-identifier binding scenario
In practice, many expressions across many rows read the same small set of model fields. This scenario binds N expressions to one model while reusing only 10 identifiers across all bindings. It validates shared-watch behaviour: regardless of how many bindings are added, only 10 watcher subscriptions are ever created.
At 10,000 bindings this shared scenario binds at roughly the same overall speed as the unique-identifier stress case.
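The shared-watch property itself is easy to model: N bindings over a fixed identifier pool should yield exactly one watcher per identifier. An illustrative sketch, not rs-x internals:

```javascript
// Bind `bindingCount` expressions that cycle through `identifierCount`
// field names, and count how many watcher subscriptions result.
function countWatchers(bindingCount, identifierCount) {
  const watchers = new Map(); // field name -> Set of subscribers
  for (let i = 0; i < bindingCount; i++) {
    const field = "v" + (i % identifierCount);
    let subs = watchers.get(field);
    if (!subs) { subs = new Set(); watchers.set(field, subs); }
    subs.add({ binding: i }); // each binding joins the field's shared watcher
  }
  return watchers.size;
}
```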