Core Concepts

Compiled vs tree: the break-even point

rs-x runs in two modes. Tree mode evaluates expressions by walking the parsed AST. Compiled mode compiles each expression to a native JavaScript function at build time. Both modes can be mixed and configured per expression via the expression compile option. By default, expressions are compiled, because compiled mode is faster in most scenarios. The main tradeoff is load time: the generated file can grow large if you have many complex expressions. In practice, most expressions are simple identifiers, so the size impact is usually minimal. You can also mitigate a large file by lazy-loading the compiled expressions that are not needed on the home page — for example, loading them in the background if you use rs-x in a web application.

Why the costs differ

Every expression is parsed into an AST. The number of nodes in that AST grows with expression complexity — a simple price has 1 node; price * quantity * (1 - discount) has 7; a deeply nested arithmetic expression can have hundreds.
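To make the node counts concrete, here is a minimal sketch using a plausible nested-object AST shape — this is an illustration, not rs-x's internal representation:

```javascript
// Hypothetical AST shape (illustrative, not rs-x internals):
// binary nodes are { op, left, right }; leaves are { name } (identifier)
// or { value } (literal).

// price * quantity * (1 - discount) — 7 nodes
const ast = {
  op: '*',
  left: {
    op: '*',
    left: { name: 'price' },
    right: { name: 'quantity' },
  },
  right: {
    op: '-',
    left: { value: 1 },
    right: { name: 'discount' },
  },
};

// Node count is a simple recursive walk: 1 for a leaf,
// 1 + both subtrees for a binary node.
function countNodes(node) {
  if (!node.op) return 1;
  return 1 + countNodes(node.left) + countNodes(node.right);
}

console.log(countNodes({ name: 'price' })); // 1
console.log(countNodes(ast));               // 7
```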

Tree mode clones the full AST for each new binding. Clone cost is proportional to node count. On every update, it walks all nodes again to evaluate the expression — also proportional to node count.
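A minimal sketch of the two node-count-proportional costs — the per-binding clone and the per-update walk — again using an assumed object AST, not rs-x's actual code:

```javascript
// Illustrative only — not rs-x internals. AST: binary nodes { op, left, right },
// leaves { name } (identifier) or { value } (literal).
const ast = {
  op: '*',
  left: { name: 'price' },
  right: { name: 'quantity' },
};

// Per-binding clone: visits every node, so cost ∝ node count.
function cloneNode(node) {
  if (!node.op) return { ...node };
  return { op: node.op, left: cloneNode(node.left), right: cloneNode(node.right) };
}

// Per-update evaluation: walks every node again, also ∝ node count.
function evaluate(node, model) {
  if (node.name !== undefined) return model[node.name];
  if (node.value !== undefined) return node.value;
  const l = evaluate(node.left, model);
  const r = evaluate(node.right, model);
  switch (node.op) {
    case '+': return l + r;
    case '-': return l - r;
    case '*': return l * r;
  }
}

const binding = cloneNode(ast); // each binding gets its own copy
console.log(evaluate(binding, { price: 4, quantity: 5 })); // 20
```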

Compiled mode compiles the expression to a JS function once (at first bind), caches the compiled plan, and shares it across all bindings. Each binding just records which model fields to watch — the plan stores only the unique dependencies, not the full AST. On every update, it calls the compiled function directly, which V8 can JIT optimise as a native function.
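The compile-once-share-everywhere idea can be sketched with the Function constructor. This is an assumption-laden illustration, not rs-x's code — in particular, `getPlan` and `bind` are hypothetical names, and the unique dependency list is passed in by hand where the real library would derive it from the parser:

```javascript
// Plans are cached by expression source and shared across bindings.
const planCache = new Map();

function getPlan(source, deps) {
  let plan = planCache.get(source);
  if (!plan) {
    // Each unique dependency becomes a function parameter; the body is
    // compiled to a real JS function that V8 can JIT-optimise.
    const fn = new Function(...deps, `return (${source});`);
    plan = { fn, deps };
    planCache.set(source, plan); // compiled once, at first bind
  }
  return plan;
}

function bind(source, deps, model) {
  const plan = getPlan(source, deps); // cheap after the first call
  // A binding records only which fields to watch — not the AST.
  return {
    watch: plan.deps,
    update: () => plan.fn(...plan.deps.map((d) => model[d])),
  };
}

const model = { x: 2, y: 3 };
const a = bind('x + y + x + y', ['x', 'y'], model);
const b = bind('x + y + x + y', ['x', 'y'], model);
console.log(a.update());      // 10
console.log(planCache.size);  // 1 — both bindings share one compiled plan
```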

The benchmark below uses expressions of the form x + y + x + y + … — always exactly 2 unique dependencies (x and y), regardless of how many nodes the expression has. This isolates the effect of AST size while keeping the dependency count constant. Measured on Apple M4, Node.js v25.4.0, 1,000 bindings.
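A sketch of how such expressions can be generated (a hypothetical helper, not part of the benchmark harness): an n-term alternating sum always has exactly 2 unique dependencies, and — assuming one AST node per leaf and per operator — parses to 2n − 1 nodes:

```javascript
// Builds 'x + y + x + y + …' with the requested number of terms.
// Unique dependencies stay fixed at 2 no matter how long it gets.
function makeExpr(terms) {
  const parts = [];
  for (let i = 0; i < terms; i++) parts.push(i % 2 === 0 ? 'x' : 'y');
  return parts.join(' + ');
}

console.log(makeExpr(4)); // 'x + y + x + y' — 7 AST nodes, 2 dependencies

// Compiling it yields an ordinary JS function of two arguments.
const fn = new Function('x', 'y', `return (${makeExpr(4)});`);
console.log(fn(1, 2)); // 6
```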

Bind time vs expression size

Creating a new binding requires setting up one watcher entry per unique dependency. In compiled mode, the bind cost scales with the number of unique dependencies — not the total node count. Because the benchmark expressions always have exactly 2 unique dependencies (x and y) regardless of how many nodes the expression has, the compiled bind cost stays nearly flat as node count grows. In tree mode, the full AST must be cloned — the cost grows linearly with node count.

The lines cross at roughly 7 nodes. Above that point, compiled mode is consistently faster for binding, and the advantage grows rapidly with expression size.

| AST nodes | Tree bind (ms) | Compiled bind (ms) | Compiled faster by |
|---:|---:|---:|---|
| 3 | 37 | 33 | 1.1× |
| 7 | 66 | 70 | tree 1.1× faster |
| 15 | 330 | 100 | 3.3× |
| 27 | 1231 | 145 | 8.5× |
| 47 | 596 | 166 | 3.6× |
| 79 | 1043 | 130 | 8.0× |
| 131 | 1693 | 528 | 3.2× |
| 219 | 2627 | 521 | 5.0× |
| 359 | 4179 | 338 | 12.3× |

Bulk update time vs expression size

When all 1,000 models change at once, every binding re-evaluates. In tree mode, re-evaluation walks the full AST — the cost grows with node count. In compiled mode, re-evaluation calls the pre-compiled JS function — the cost stays nearly flat because V8 treats it as a native function call.

The lines cross at roughly 12 nodes. At 359 nodes, compiled bulk updates are 11× faster than tree.

| AST nodes | Tree update (ms) | Compiled update (ms) | Compiled faster by |
|---:|---:|---:|---|
| 3 | 6.9 | 7.4 | tree 1.1× faster |
| 7 | 8.4 | 11 | tree 1.3× faster |
| 15 | 15 | 13 | 1.1× |
| 27 | 25 | 14 | 1.7× |
| 47 | 30 | 15 | 2.0× |
| 79 | 44 | 18 | 2.4× |
| 131 | 72 | 17 | 4.1× |
| 219 | 156 | 18 | 8.8× |
| 359 | 622 | 55 | 11.4× |

Single update time vs expression size

When one model field changes, one binding re-evaluates. The same pattern holds: tree mode re-walks the AST (cost grows with size), compiled mode calls the pre-compiled function (cost stays flat). Because only one binding fires, the absolute times are very small — fractions of a millisecond — but the crossover is still visible around 25 nodes.

| AST nodes | Tree update (ms) | Compiled update (ms) | Compiled faster by |
|---:|---:|---:|---|
| 3 | 0.065 | 0.068 | equal |
| 7 | 0.062 | 0.073 | tree 1.2× faster |
| 15 | 0.079 | 0.096 | tree 1.2× faster |
| 27 | 0.111 | 0.108 | equal |
| 47 | 0.118 | 0.104 | 1.1× |
| 79 | 0.127 | 0.110 | 1.2× |
| 131 | 0.184 | 0.111 | 1.7× |
| 219 | 0.210 | 0.121 | 1.7× |
| 359 | 1.7 | 0.135 | 12.7× |

Which mode should you use?

For most real-world expressions — arithmetic, member chains, conditionals — the break-even is reached almost immediately. The advantage compounds with expression complexity: a deeply nested formula with 100+ nodes binds 6–9× faster in compiled mode and updates 3–7× faster.

Compiled mode is the default, so rsx(expression)(model) and rsx(expression, { compiled: true })(model) behave the same. To force tree mode for an expression, pass { compiled: false } as the second argument. Tree mode may be preferable if you have a large number of very simple single-identifier expressions and minimising JIT warm-up time matters more than update throughput.

The benchmark expressions here always have exactly 2 unique dependencies. The real crossover in your application depends on your specific expression shapes. Expressions with more unique dependencies but fewer nodes (e.g. a flat sum of many different fields) will see a later crossover for bind time; expressions with many nodes but few unique dependencies (e.g. a complex formula reusing the same fields) will see a much earlier one.
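The distinction between the two shapes can be illustrated with a toy dependency extractor — a regex-based sketch with hypothetical helper names, sufficient for flat arithmetic expressions but not a general parser:

```javascript
// Unique dependencies: the distinct identifiers the expression reads.
function uniqueDeps(source) {
  return [...new Set(source.match(/[a-z]\w*/g) ?? [])];
}

// Node count for flat +/* chains: one node per term plus one per operator.
function nodeCount(source) {
  const terms = source.split(/\s[+*-]\s/).length;
  return 2 * terms - 1;
}

const flatSum = 'a + b + c + d + e + f'; // many unique deps per node
const formula = 'x + y + x + y + x + y'; // few unique deps, same node count

console.log(uniqueDeps(flatSum).length); // 6 — bind cost grows with these
console.log(uniqueDeps(formula).length); // 2 — bind cost stays low
console.log(nodeCount(flatSum));         // 11 — both expressions
```

Both expressions have 11 nodes, but the flat sum sets up three times as many watchers per binding, which is why its compiled-mode bind crossover comes later.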