rs-x runs in two modes. Tree mode evaluates expressions by walking the parsed AST. Compiled mode compiles each expression to a native JavaScript function at build time. Both modes can be mixed and configured per expression via the expression compile option. By default expressions are compiled, because compiled mode is faster for most scenarios. The main trade-off is loading time: the generated code can make the file size quite large if you have many complex expressions. Most of the time, though, expressions are just identifiers, so the size impact is minimal. You can mitigate a large file size by compiling only the expressions needed on the home page, lazy-loading the rest, and loading them in the background if you use rs-x in a web application.
Why the costs differ
Every expression is parsed into an AST. The number of nodes in that AST grows with expression complexity: a bare identifier like price has 1 node; price * quantity * (1 - discount) has 7; a deeply nested arithmetic expression can have hundreds.
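To make the counting concrete, here is a minimal sketch of how the 7 nodes of price * quantity * (1 - discount) add up. The AST shape is illustrative only, not rs-x's actual internal representation.

```javascript
// Illustrative AST for (price * quantity) * (1 - discount).
// Node types and field names are assumptions, not rs-x internals.
const ast = {
  type: 'Binary', op: '*',
  left: {
    type: 'Binary', op: '*',
    left: { type: 'Identifier', name: 'price' },
    right: { type: 'Identifier', name: 'quantity' },
  },
  right: {
    type: 'Binary', op: '-',
    left: { type: 'Literal', value: 1 },
    right: { type: 'Identifier', name: 'discount' },
  },
};

// Count every node in the tree: 1 for the node itself plus its children.
function countNodes(node) {
  if (!node || typeof node !== 'object') return 0;
  let n = 1;
  if (node.left) n += countNodes(node.left);
  if (node.right) n += countNodes(node.right);
  return n;
}

console.log(countNodes(ast)); // 7
```

Two binary operators at the top, one binary operator in each subtree, and four leaves: seven nodes in total.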
Tree mode clones the full AST for each new binding. Clone cost is proportional to node count. On every update, it walks all nodes again to evaluate the expression — also proportional to node count.
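The per-update walk can be pictured as a recursive evaluator, one call per node, which is why cost tracks node count. The evaluator and AST shape below are illustrative sketches, not rs-x's actual implementation.

```javascript
// Illustrative recursive evaluator: one call per AST node, so the
// cost of each update is proportional to node count.
function evaluate(node, model) {
  switch (node.type) {
    case 'Literal':    return node.value;
    case 'Identifier': return model[node.name];
    case 'Binary': {
      const l = evaluate(node.left, model);
      const r = evaluate(node.right, model);
      switch (node.op) {
        case '+': return l + r;
        case '-': return l - r;
        case '*': return l * r;
        case '/': return l / r;
      }
    }
  }
}

// price * quantity * (1 - discount), the 7-node example from above.
const formula = {
  type: 'Binary', op: '*',
  left: {
    type: 'Binary', op: '*',
    left: { type: 'Identifier', name: 'price' },
    right: { type: 'Identifier', name: 'quantity' },
  },
  right: {
    type: 'Binary', op: '-',
    left: { type: 'Literal', value: 1 },
    right: { type: 'Identifier', name: 'discount' },
  },
};

console.log(evaluate(formula, { price: 10, quantity: 3, discount: 0.1 })); // ≈ 27
```

Every update repeats all seven recursive calls; a 359-node expression repeats 359 of them.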
Compiled mode compiles the expression to a JS function once (at first bind), caches the compiled plan, and shares it across all bindings. Each binding just records which model fields to watch — the plan stores only the unique dependencies, not the full AST. On every update, it calls the compiled function directly, which V8 can JIT optimise as a native function.
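The compile-once-and-share behaviour can be sketched roughly like this. It is an illustration of the pattern only, assuming a naive identifier scan and a new Function compile step; rs-x's actual compiler and plan format are not shown here.

```javascript
// Sketch of the compile-and-cache pattern (not rs-x internals).
// The cache is keyed by expression source, so every binding of the
// same expression shares one compiled plan.
const planCache = new Map();

function getPlan(src) {
  let plan = planCache.get(src);
  if (plan) return plan;
  // Record only the unique identifiers to watch (naive scan; a real
  // parser must also handle literals, keywords, member chains, etc.).
  const deps = [...new Set(src.match(/[A-Za-z_$][\w$]*/g) ?? [])];
  // Compile once to a plain JS function over the model object.
  const body = src.replace(/[A-Za-z_$][\w$]*/g, 'm.$&');
  const fn = new Function('m', `return (${body});`);
  plan = { deps, fn };
  planCache.set(src, plan);
  return plan;
}

const plan = getPlan('x + y + x + y');
console.log(plan.deps);                         // ['x', 'y']
console.log(plan.fn({ x: 2, y: 3 }));           // 10
console.log(getPlan('x + y + x + y') === plan); // true: shared, not recompiled
```

Note that the plan stores 2 dependencies even though the expression has 7 AST nodes, which is exactly the asymmetry the benchmarks below exploit.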
The benchmark below uses expressions of the form x + y + x + y + … — always exactly 2 unique dependencies (x and y), regardless of how many nodes the expression has. This isolates the effect of AST size while keeping the dependency count constant. Measured on Apple M4, Node.js v25.4.0, 1,000 bindings.
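The benchmark expressions can be generated like this (a sketch of the shape described above, not the actual harness): k alternating terms produce k identifier nodes plus k - 1 plus nodes, i.e. 2k - 1 AST nodes, while the unique dependency count stays at 2.

```javascript
// Generate 'x + y + x + ...' with the given number of terms.
// k terms => k identifier nodes + (k - 1) '+' nodes = 2k - 1 AST nodes,
// but always exactly 2 unique dependencies (x and y).
function makeExpr(terms) {
  return Array.from({ length: terms }, (_, i) => (i % 2 === 0 ? 'x' : 'y'))
    .join(' + ');
}

function nodeCount(terms) {
  return 2 * terms - 1;
}

console.log(makeExpr(4));   // 'x + y + x + y'
console.log(nodeCount(4));  // 7
```

The node counts in the tables (3, 7, 15, ..., 359) all fit this 2k - 1 pattern; 180 terms gives the largest case, 359 nodes.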
Bind time vs expression size
Creating a new binding requires setting up one watcher entry per unique dependency. In compiled mode, the bind cost scales with the number of unique dependencies — not the total node count. Because the benchmark expressions always have exactly 2 unique dependencies (x and y) regardless of how many nodes the expression has, the compiled bind cost stays nearly flat as node count grows. In tree mode, the full AST must be cloned — the cost grows linearly with node count.
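The two bind costs can be contrasted in a small sketch, assuming tree mode deep-clones via structuredClone and compiled mode only records the plan's dependency list; neither is rs-x's real bind path.

```javascript
// Illustrative bind costs (not rs-x internals).
function bindTree(ast) {
  // structuredClone walks the whole tree, so cost is O(node count).
  return { ast: structuredClone(ast) };
}

function bindCompiled(plan) {
  // One watcher entry per unique dependency: O(deps), independent of
  // how large the AST behind the plan is.
  return { watch: [...plan.deps] };
}

const ast = {
  type: 'Binary', op: '+',
  left: { type: 'Identifier', name: 'x' },
  right: { type: 'Identifier', name: 'y' },
};
const plan = { deps: ['x', 'y'] };

console.log(bindTree(ast).ast !== ast);       // true: a fresh copy per binding
console.log(bindCompiled(plan).watch.length); // 2, regardless of AST size
```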
The lines cross at roughly 7 nodes. Above that point, compiled mode is consistently faster for binding, and the advantage grows rapidly with expression size.
| AST nodes | Tree bind (ms) | Compiled bind (ms) | Compiled faster by |
|----------:|---------------:|-------------------:|--------------------|
| 3         | 37             | 33                 | 1.1×               |
| 7         | 66             | 70                 | tree 1.1× faster   |
| 15        | 330            | 100                | 3.3×               |
| 27        | 1231           | 145                | 8.5×               |
| 47        | 596            | 166                | 3.6×               |
| 79        | 1043           | 130                | 8.0×               |
| 131       | 1693           | 528                | 3.2×               |
| 219       | 2627           | 521                | 5.0×               |
| 359       | 4179           | 338                | 12.3×              |
Bulk update time vs expression size
When all 1,000 models change at once, every binding re-evaluates. In tree mode, re-evaluation walks the full AST — the cost grows with node count. In compiled mode, re-evaluation calls the pre-compiled JS function — the cost stays nearly flat because V8 treats it as a native function call.
The lines cross at roughly 12 nodes. At 359 nodes, compiled bulk updates are 11× faster than tree.
| AST nodes | Tree update (ms) | Compiled update (ms) | Compiled faster by |
|----------:|-----------------:|---------------------:|--------------------|
| 3         | 6.9              | 7.4                  | tree 1.1× faster   |
| 7         | 8.4              | 11                   | tree 1.3× faster   |
| 15        | 15               | 13                   | 1.1×               |
| 27        | 25               | 14                   | 1.7×               |
| 47        | 30               | 15                   | 2.0×               |
| 79        | 44               | 18                   | 2.4×               |
| 131       | 72               | 17                   | 4.1×               |
| 219       | 156              | 18                   | 8.8×               |
| 359       | 622              | 55                   | 11.4×              |
Single update time vs expression size
When one model field changes, one binding re-evaluates. The same pattern holds: tree mode re-walks the AST (cost grows with size), compiled mode calls the pre-compiled function (cost stays flat). Because only one binding fires, the absolute times are very small, tenths of a millisecond, but the crossover is still visible around 25 nodes.
| AST nodes | Tree update (ms) | Compiled update (ms) | Compiled faster by |
|----------:|-----------------:|---------------------:|--------------------|
| 3         | 0.065            | 0.068                | equal              |
| 7         | 0.062            | 0.073                | tree 1.2× faster   |
| 15        | 0.079            | 0.096                | tree 1.2× faster   |
| 27        | 0.111            | 0.108                | equal              |
| 47        | 0.118            | 0.104                | 1.1×               |
| 79        | 0.127            | 0.110                | 1.2×               |
| 131       | 0.184            | 0.111                | 1.7×               |
| 219       | 0.210            | 0.121                | 1.7×               |
| 359       | 1.7              | 0.135                | 12.7×              |
Which mode should you use?
For most real-world expressions (arithmetic, member chains, conditionals), the break-even is reached almost immediately. The advantage compounds with expression complexity: a deeply nested formula with 100+ nodes binds 3–12× faster in compiled mode and updates 4–11× faster.
To opt in to compiled mode, pass { compiled: true } as the second argument: rsx(expression, { compiled: true })(model). To use tree mode, omit the option or pass { compiled: false }. Tree mode may be preferable if you have a large number of very simple single-identifier expressions and minimising JIT warm-up time matters more than update throughput.
The benchmark expressions here always have exactly 2 unique dependencies. The real crossover in your application depends on your specific expression shapes. Expressions with more unique dependencies but fewer nodes (e.g. a flat sum of many different fields) will see a later crossover for bind time; expressions with many nodes but few unique dependencies (e.g. a complex formula reusing the same fields) will see a much earlier one.