Avoiding TypeScript compilation in Node

Published on: Mon Dec 19 2022

Introduction

Unlike newer JavaScript runtimes such as Deno or Bun, Node does not run TypeScript natively. That means you first have to transpile TypeScript files to JavaScript before you can run them.

$ node src/index.ts        # This doesn't work
$ tsc && node src/index.js # This works

Personally, I hate that intermediate build step, especially when I’m sharing code between packages or don’t have automatic CI/CD pipelines set up. I’d much rather use Deno for TypeScript projects, but sometimes Node simply offers the better ecosystem (like hosting and serverless options), which forces me to use Node over Deno.

I also ran into this “problem” in a project of mine: I had a monorepo and was building an API for a new feature. The whole project was written in TypeScript, which wasn’t an issue, because the production components were two Next.js applications with CI/CD set up, so I didn’t have to worry about TypeScript compilation there. But for architectural reasons this API was going to run on a VPS and be managed manually.

I used fastify for the HTTP server and set up a couple of scripts in the package.json.

  • A dev script that runs tsx src/index.ts to start the server during development
  • A build script that runs tsc to output JavaScript files
  • A start script that runs node dist/index.js to run the server with node directly

Pretty standard for a TypeScript application: use a tool like tsx or ts-node to run the TypeScript files directly (not really directly, of course; the tools just hide the transpilation step from you) to make the DX smoother and faster.
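The scripts section of that package.json looked roughly like this (a sketch from memory, assuming tsx and typescript are dev dependencies and that tsconfig.json points outDir at dist/):

{
  "scripts": {
    "dev": "tsx src/index.ts",
    "build": "tsc",
    "start": "node dist/index.js"
  }
}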

But then I wondered: why not keep only the start script and make it run tsx src/index.ts? That way I wouldn’t have to worry about forgetting to compile my TypeScript source code when I deployed an update to the API, since I didn’t have an automatic CI/CD pipeline.
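In other words, the scripts would collapse to a single entry (again just a sketch):

{
  "scripts": {
    "start": "tsx src/index.ts"
  }
}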

So I decided to investigate what the tradeoff would be if I used a tool to run TypeScript with Node directly. Maybe it’s performance? Memory usage? Let’s find out.

Methodology

You can find the code for my experiments in this repo: daniellionel01/node-ts-performance

I wanted to benchmark pure computational speed and network speed. So after some googling I found @thi.ng/bench, a benchmarking tool from an amazing open source software collection.

For the pure computational speed benchmark I used the example from the @thi.ng/bench package, which calculates Fibonacci numbers.
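The benchmark file looks roughly like this (a sketch adapted from the @thi.ng/bench readme example; the iter and size options are inferred from the Iter and Size columns in the output below, so treat the exact option names as assumptions):

import { suite } from "@thi.ng/bench";

// iterative Fibonacci, as used in the @thi.ng/bench example
const fib2 = (n: number) => {
  let a = 0;
  let b = 1;
  while (n-- > 0) {
    [a, b] = [b, a + b];
  }
  return a;
};

// 10 iterations per case, 100,000 calls per iteration
suite(
  [10, 20, 30, 40].map((n) => ({ title: `fib2(${n})`, fn: () => fib2(n) })),
  { iter: 10, size: 100000 }
);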

For the network speed benchmark I used autocannon to measure a fastify API that I set up to return a bit of JSON on the / route.
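The server itself is only a few lines; a minimal sketch might look like this (the exact JSON payload is an assumption, the port matches the autocannon output below):

import Fastify from "fastify";

const app = Fastify();

// single route that returns a bit of JSON
app.get("/", async () => ({ hello: "world" }));

app.listen({ port: 8080 }, (err) => {
  if (err) {
    console.error(err);
    process.exit(1);
  }
});

autocannon was then pointed at it with its defaults (10 connections for 10 seconds), e.g. npx autocannon http://localhost:8080.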

I ran each benchmark three times: with tsx, with ts-node, and with regular node running the compiled JavaScript.

Results

Here are the results of all six benchmark runs (two benchmarks, each run with tsx, ts-node, and plain node).

Fibonacci

TSX

$ yarn fib:tsx
 814.92ms
 0.05ms
 428.53ms
 62.49ms
Times in ms, SD in %.

Title    | Iter | Size   | Total  | Mean  | Median | Min   | Max   | Q1    | Q3    | SD%
fib2(10) | 10   | 100000 | 51.35  | 5.13  | 4.74   | 4.62  | 7.38  | 4.69  | 5.75  | 15.83
fib2(20) | 10   | 100000 | 123.52 | 12.35 | 12.25  | 11.98 | 13.64 | 12.21 | 12.52 | 3.73
fib2(30) | 10   | 100000 | 153.01 | 15.30 | 15.37  | 15.09 | 15.51 | 15.19 | 15.45 | 1.00
fib2(40) | 10   | 100000 | 186.79 | 18.68 | 18.45  | 17.69 | 20.77 | 18.44 | 19.30 | 4.21

ts-node

$ yarn fib:ts-node
 742.32ms
 0.05ms
 385.93ms
 71.95ms
Times in ms, SD in %.

Title    | Iter | Size   | Total  | Mean  | Median | Min   | Max   | Q1    | Q3    | SD%
fib2(10) | 10   | 100000 | 48.98  | 4.90  | 4.59   | 4.33  | 7.76  | 4.42  | 5.04  | 19.97
fib2(20) | 10   | 100000 | 113.01 | 11.30 | 11.24  | 10.92 | 12.50 | 11.21 | 11.28 | 3.69
fib2(30) | 10   | 100000 | 147.85 | 14.79 | 14.44  | 13.93 | 17.65 | 14.33 | 15.76 | 7.18
fib2(40) | 10   | 100000 | 168.47 | 16.85 | 16.89  | 16.40 | 17.25 | 16.81 | 17.18 | 1.54

JavaScript

$ yarn fib:js
 796.97ms
 0.08ms
 415.02ms
 64.00ms
Times in ms, SD in %.

Title    | Iter | Size   | Total  | Mean  | Median | Min   | Max   | Q1    | Q3    | SD%
fib2(10) | 10   | 100000 | 52.86  | 5.29  | 4.96   | 4.91  | 7.61  | 4.94  | 5.60  | 15.11
fib2(20) | 10   | 100000 | 133.90 | 13.39 | 13.34  | 12.86 | 14.63 | 13.20 | 13.67 | 3.57
fib2(30) | 10   | 100000 | 168.18 | 16.82 | 16.91  | 15.91 | 17.67 | 16.79 | 17.01 | 2.52
fib2(40) | 10   | 100000 | 183.73 | 18.37 | 18.27  | 17.93 | 19.50 | 18.09 | 18.76 | 2.50

HTTP Server

TSX

$ yarn fib:bench
Running 10s test @ http://localhost:8080
10 connections


Stat    | 2.5% | 50%  | 97.5% | 99%  | Avg     | Stdev   | Max
Latency | 0 ms | 0 ms | 0 ms  | 0 ms | 0.02 ms | 0.14 ms | 12 ms

Stat      | 1%      | 2.5%    | 50%     | 97.5%   | Avg      | Stdev   | Min
Req/Sec   | 17119   | 17119   | 29007   | 29311   | 27875.64 | 3411.48 | 17108
Bytes/Sec | 3.22 MB | 3.22 MB | 5.46 MB | 5.51 MB | 5.24 MB  | 641 kB  | 3.22 MB

Req/Bytes counts sampled once per second.
# of samples: 11

307k requests in 11.02s, 57.6 MB read

ts-node

$ yarn fib:bench
Running 10s test @ http://localhost:8080
10 connections


Stat    | 2.5% | 50%  | 97.5% | 99%  | Avg     | Stdev   | Max
Latency | 0 ms | 0 ms | 0 ms  | 0 ms | 0.01 ms | 0.12 ms | 13 ms

Stat      | 1%      | 2.5%    | 50%    | 97.5%   | Avg     | Stdev   | Min
Req/Sec   | 20623   | 20623   | 28719  | 29215   | 28067.2 | 2494.06 | 20614
Bytes/Sec | 3.88 MB | 3.88 MB | 5.4 MB | 5.49 MB | 5.28 MB | 469 kB  | 3.88 MB

Req/Bytes counts sampled once per second.
# of samples: 10

281k requests in 10.02s, 52.8 MB read

JavaScript

$ yarn fib:bench
Running 10s test @ http://localhost:8080
10 connections


Stat    | 2.5% | 50%  | 97.5% | 99%  | Avg     | Stdev   | Max
Latency | 0 ms | 0 ms | 0 ms  | 0 ms | 0.01 ms | 0.11 ms | 15 ms

Stat      | 1%      | 2.5%    | 50%    | 97.5%   | Avg      | Stdev   | Min
Req/Sec   | 22511   | 22511   | 32431  | 32639   | 31453.82 | 2845.98 | 22507
Bytes/Sec | 4.24 MB | 4.24 MB | 6.1 MB | 6.14 MB | 5.91 MB  | 534 kB  | 4.23 MB

Req/Bytes counts sampled once per second.
# of samples: 11

346k requests in 11.02s, 65 MB read

Conclusion

From the results of our benchmarks we can conclude the following:

  • ts-node had better network performance than tsx
  • tsx had better pure computational performance than ts-node
  • node had the best performance overall, but the gap wasn’t humongous (nothing like a 2x improvement)

My conclusion is that, for my use case, using tsx directly is worth it. I’d rather lose a couple of percentage points of maximum throughput (which isn’t even relevant for this use case anyway, only in theory) than forget to run yarn build and have a bug stay unfixed for days, costing hours of debugging.

But you should definitely take my tests with a big grain of salt. If you’re evaluating the same question for your own project, run the benchmarks on your own hardware, modify them, and run them multiple times over the span of a couple of minutes, instead of just once while Spotify is running on the same laptop and using your CPU in an unpredictable manner.

Sources