rs-x v2 rewrites the parser and introduces a compiled expression engine. This page compares every benchmark metric side by side for v1.0.0 and v2.0.0 (tree and compiled modes). Measured on Apple M4, Node.js v25.4.0.
Summary
v2 ships two fundamental changes: a new recursive-descent parser that eliminates the old parser's fixed startup overhead, and a compiled expression engine. The AOT compiler generates a plain JS function for each expression at build time; at runtime rs-x looks up and calls the pre-generated function directly. The compiled engine is the source of most update improvements.
The one regression is upfront bind cost: v2 does more work per binding than v1 — it compiles the expression, sets up a plan cache entry, and registers typed watchers. For applications that are update-heavy relative to bind count (the common case), this cost pays back quickly.
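The compiled path can be pictured with a small sketch. This illustrates the general technique only, not rs-x's actual internals; `compile`, `planCache`, and the use of the `Function` constructor are assumptions standing in for the real build-time codegen:

```typescript
// Sketch of an ahead-of-time expression compiler (hypothetical, not
// rs-x's real code): each expression string becomes a plain JS function
// once, and every later evaluation is a cache lookup plus a direct call.
type Scope = Record<string, unknown>;
type Compiled = (scope: Scope) => unknown;

const planCache = new Map<string, Compiled>();

function compile(expr: string): Compiled {
  let fn = planCache.get(expr);
  if (!fn) {
    // A real build step would emit this function ahead of time; the
    // Function constructor stands in for that codegen here.
    fn = new Function("scope", `with (scope) { return (${expr}); }`) as Compiled;
    planCache.set(expr, fn);
  }
  return fn;
}

// Updates skip parsing entirely: lookup, then a direct call that the
// engine can optimise like any ordinary function.
const evalPrice = compile("price * qty");
console.log(evalPrice({ price: 3, qty: 4 })); // 12
```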
| Metric | Unit | v1.0.0 | v2 tree | v2 compiled | Tree gain |
|---|---|---:|---:|---:|---:|
| Parse 1 node | µs/op | 5.482 | 0.731 | 0.771 | +86.7% |
| Parse 3 nodes | µs/op | 6.993 | 1.866 | 1.887 | +73.3% |
| Parse 7 nodes | µs/op | 10.524 | 4.095 | 4.088 | +61.1% |
| Parse 15 nodes | µs/op | 17.710 | 8.486 | 8.678 | +52.1% |
| Parse 31 nodes | µs/op | 25.173 | 17.528 | 17.766 | +30.4% |
| Parse 63 nodes | µs/op | 44.295 | 35.618 | 35.775 | +19.6% |
| Parse+clone 63 nodes | µs/op | 80.986 | 38.342 | 199.400 | +52.7% |
| Bind unique 1,000 | ms | 35.092 | 38.350 | 32.317 | -9.3% |
| Bind same 1,000 | ms | 25.444 | 43.373 | 45.661 | -70.5% |
| Bind unique 10,000 | ms | 521.444 | 737.067 | 561.750 | -41.4% |
| Bind same 10,000 | ms | 638.054 | 884.867 | 440.759 | -38.7% |
| Single update 1,000 ~ | ms | 0.089 | 0.009 | 0.008 | +90.4% |
| Bulk update 1,000 | ms | 7.904 | 2.388 | 2.873 | +69.8% |
| Single update 10,000 ~ | ms | 0.107 | 0.002 | 0.002 | +98.1% |
| Bulk update 10,000 | ms | 146.234 | 72.809 | 61.112 | +50.2% |
~ = high-variance measurement; treat as indicative. Positive gain % means v2 is faster than v1.
Parsing: up to 87% faster
The v2 parser uses a hand-written recursive-descent approach instead of the general-purpose parser used in v1. The most dramatic improvement is at the low end: a single-identifier expression dropped from 5.5 µs to 0.7 µs — 87% faster. Larger expressions improve 20–30% as the fixed overhead becomes a smaller fraction of total parse time.
Both modes use the same parser, so the parse numbers for tree and compiled modes are effectively identical.
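The recursive-descent approach can be sketched in a few lines. This is a toy grammar (identifiers, `+`, `*` with the usual precedence) for illustration only, not rs-x's actual parser or node shapes:

```typescript
// Tiny recursive-descent parser: one function per precedence level,
// no table-driven machinery, so there is almost no fixed startup cost.
type AstNode =
  | { kind: "ident"; name: string }
  | { kind: "binary"; op: "+" | "*"; left: AstNode; right: AstNode };

function parse(src: string): AstNode {
  const tokens = src.match(/[A-Za-z_]\w*|[+*]/g) ?? [];
  let pos = 0;
  const peek = () => tokens[pos];

  function primary(): AstNode {
    return { kind: "ident", name: tokens[pos++] };
  }
  function product(): AstNode { // '*' binds tighter than '+'
    let left = primary();
    while (peek() === "*") { pos++; left = { kind: "binary", op: "*", left, right: primary() }; }
    return left;
  }
  function sum(): AstNode {
    let left = product();
    while (peek() === "+") { pos++; left = { kind: "binary", op: "+", left, right: product() }; }
    return left;
  }
  return sum();
}

const root = parse("a + b * c");
// '+' at the root, with 'b * c' grouped under the right operand
console.log(root.kind, root.kind === "binary" && root.op); // binary +
```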
| Nodes | v1.0.0 (µs) | v2.0.0 (µs) | Gain |
|---:|---:|---:|---:|
| 1 | 5.482 | 0.731 | +87% |
| 3 | 6.993 | 1.866 | +73% |
| 7 | 10.524 | 4.095 | +61% |
| 15 | 17.710 | 8.486 | +52% |
| 31 | 25.173 | 17.528 | +30% |
| 63 | 44.295 | 35.618 | +20% |
Binding: v2 costs more upfront
Bind cost in v2 is higher than v1, but this is largely a cost shift. v2 resolves all expression dependencies and builds the full watch graph once at bind time, work that v1 deferred to every individual update evaluation. In tree mode the overhead on unique expressions ranges from about 9% at 1,000 bindings to about 41% at 10,000; repeated expressions fare worse at small counts (up to about 70%). Compiled mode has a similar bind profile and even beats v1 outright at several sizes; the bulk of its savings shows up at evaluation time.
The tradeoff is that subsequent calls to update the same binding are significantly faster, and memory usage is lower (compiled plans are shared). For most applications that bind once and update many times, this is a net positive.
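The cost shift can be illustrated with a minimal watch-graph sketch. The names `bind` and `watchGraph` are hypothetical, and real dependency resolution is far more involved than a regex scan:

```typescript
// Sketch of bind-time dependency resolution (hypothetical structure):
// dependencies are resolved once per binding and recorded in a watch
// graph, so an update only touches bindings that read the changed key.
type Binding = { expr: string; deps: string[] };

const watchGraph = new Map<string, Set<Binding>>();

function bind(expr: string): Binding {
  // Upfront work, done once: dependency extraction + watcher registration.
  const deps = [...new Set(expr.match(/[A-Za-z_]\w*/g) ?? [])];
  const binding: Binding = { expr, deps };
  for (const d of deps) {
    let watchers = watchGraph.get(d);
    if (!watchers) watchGraph.set(d, (watchers = new Set()));
    watchers.add(binding);
  }
  return binding;
}

bind("price * qty");
bind("qty + tax");
// An update to 'qty' notifies exactly the two bindings that depend on it.
console.log(watchGraph.get("qty")!.size); // 2
console.log(watchGraph.get("tax")!.size); // 1
```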
| Bindings | v1 bind (ms) | v2 tree (ms) | v2 compiled (ms) |
|---:|---:|---:|---:|
| 1,000 | 35.092 | 38.350 | 32.317 |
| 3,000 | 121.675 | 143.833 | 106.509 |
| 5,000 | 235.588 | 260.666 | 193.635 |
| 10,000 | 521.444 | 737.067 | 561.750 |
Updates: 50–70% faster
Updates are where v2 wins clearly. Tree mode is already faster than v1 for bulk updates — the new watcher architecture notifies only affected expressions more efficiently. Compiled mode is faster still: it calls the pre-compiled JS function directly, which V8 JIT-optimises as a regular function call.
At 10,000 bindings with a bulk update, v2 compiled mode takes 61 ms vs 146 ms in v1 — 58% faster. Single update times are very small in all versions and not the meaningful comparison.
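The difference between the two evaluation paths can be sketched as follows. Both shapes are illustrative, not rs-x's actual node or plan types:

```typescript
// Why compiled mode wins on updates (illustrative): tree mode walks an
// AST node per evaluation, while compiled mode makes one direct call
// into a plain function that V8 can JIT-optimise like any other.
type Expr =
  | { kind: "ident"; name: string }
  | { kind: "mul"; left: Expr; right: Expr };
type Vars = Record<string, number>;

function evalTree(n: Expr, vars: Vars): number {
  return n.kind === "ident"
    ? vars[n.name]
    : evalTree(n.left, vars) * evalTree(n.right, vars);
}

const mulAst: Expr = {
  kind: "mul",
  left: { kind: "ident", name: "price" },
  right: { kind: "ident", name: "qty" },
};
const compiled = (vars: Vars) => vars.price * vars.qty; // pre-generated

console.log(evalTree(mulAst, { price: 3, qty: 4 }), compiled({ price: 3, qty: 4 })); // 12 12
```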
| Bindings | v1 bulk update (ms) | v2 tree (ms) | v2 compiled (ms) | Best gain |
|---:|---:|---:|---:|---:|
| 1,000 | 7.904 | 2.388 | 2.873 | +70% |
| 3,000 | 29.483 | 13.048 | 18.277 | +56% |
| 5,000 | 55.091 | 21.263 | 28.310 | +61% |
| 10,000 | 146.234 | 72.809 | 61.112 | +58% |
Parse cache (parse + clone): v2 tree is faster
When an expression is already cached and a new binding clones the cached AST, v2 tree mode is consistently faster than v1: the v1 clone performed more allocations per node, while v2 uses a tighter object structure. Compiled mode replaces AST cloning with a plan cache lookup, which has different cost characteristics: the lookup itself is cheap, but a cache miss pays compilation cost up front, which is likely why compiled mode's parse+clone number (199.4 µs at 63 nodes) sits well above tree mode's 38.3 µs.
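The two caching strategies can be contrasted in a short sketch. The cache names and shapes here are hypothetical:

```typescript
// Tree mode clones the cached AST so each binding owns a mutable copy
// (O(nodes) allocations per bind); compiled mode shares one immutable
// plan, so each new binding is a single O(1) lookup.
type Ast = { kind: string; children: Ast[] };

const astCache = new Map<string, Ast>();
const compiledPlans = new Map<string, { fn: (s: Record<string, number>) => number }>();

function cloneAst(n: Ast): Ast {
  return { kind: n.kind, children: n.children.map(cloneAst) };
}

astCache.set("a + b", {
  kind: "+",
  children: [{ kind: "a", children: [] }, { kind: "b", children: [] }],
});
compiledPlans.set("a + b", { fn: (s) => s.a + s.b });

const copy = cloneAst(astCache.get("a + b")!); // tree mode: fresh allocation per bind
const plan = compiledPlans.get("a + b")!;      // compiled mode: shared, no copy
console.log(copy !== astCache.get("a + b"), plan === compiledPlans.get("a + b")); // true true
```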