NodeJS Native Module vs WASM


In my previous post about Native Rust Modules for NodeJS, people asked me how neon bindings would compare to WASM. Let’s check!

What is WASM?

WebAssembly (sometimes abbreviated Wasm) is an open standard that defines a portable binary-code format for executable programs, and a corresponding text format, as well as interfaces for facilitating interactions between such programs and their host environment. The main goal of WebAssembly is to enable high-performance applications on web pages, but the format is designed to be executed and integrated in other environments as well, including standalone ones.

Wikipedia

WASM (or WebAssembly) is a binary instruction format. It is executed by a portable VM that is currently implemented in all major browsers.

However, like assembly, WASM is not that pleasant to write by hand, and therefore other languages, Rust among them, support WASM as a compilation target.

The purpose of this article is to:

  1. Do a quick introduction to compiling Rust into WASM
  2. Benchmark Pure JS, Native Module and WASM Module implementations of the same function
  3. Provide basic guidelines on when to choose each implementation

Please make sure you’ve read my previous article, because this one is built on it. You can find the full code for this tutorial on GitHub. Let’s dive in.

How to compile Rust to WASM

For native modules we have Neon; for WASM we have wasm-bindgen. wasm-bindgen provides the tools needed for importing JS functions into Rust and exporting Rust functions to JS. Let us look at how we can create a WASM version of our fibonacci function.

First, we need to make sure our Cargo.toml lists wasm-bindgen as a dependency and declares the crate as a cdylib (wasm-pack needs this in order to produce a linkable WASM library):

[lib]
crate-type = ["cdylib"]

[dependencies]
wasm-bindgen = { version = "0.2.78" }

After that, all that is left is to mark our function with a special macro, like this:

use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub fn fibonacci(n: i32) -> i32 {
    // The function must be `pub` for wasm-bindgen to export it to JS.
    match n {
        n if n < 1 => 0,
        n if n <= 2 => 1,
        _ => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

And that’s it! 🎉

Thank you for reading. See you in the next one.

Not really. I suggest you read the wasm-bindgen Guide; it contains a lot of information about how to interact with JS from Rust and vice versa, as well as which types are supported and how you can transfer them between the two languages.

In order to build the WASM, we need another tool, called wasm-pack. We can then execute the following command: wasm-pack build --target nodejs, which will produce NodeJS-compatible WASM (you can read more about the different targets here). The resulting code will be placed in the pkg directory and will contain 4 files:

  1. index.d.ts – Typescript definitions for our module
  2. index.js – The main file that will initialize the WebAssembly module and create a WebAssembly instance that we can interact with (feel free to take a look inside, it's very readable)
  3. index.wasm – The actual machine code of the WebAssembly
  4. index.wasm.d.ts – Typescript definitions for the WASM file

You then simply require the files from the pkg dir and use them as regular JavaScript (or TypeScript):

const { fibonacci } = require("./pkg/index");
const result = fibonacci(number);

What about Performance?

Ah. Performance. The thing that JavaScript developers care about the most.

I'm going to use the hyperfine tool: run each fibonacci implementation with 3 warmup runs, computing different numbers, take the mean running time, and present you the results. Please note: these are not laboratory-grade benchmarks.

All benchmarks were run on a 2020 Mac Mini M1 with 16GB of memory.

Runtime               30th Fibonacci   44th Fibonacci   45th Fibonacci   46th Fibonacci
JavaScript (NodeJS)   165.2ms          5.846s           9.358s           15.038s
Native Rust           161.5ms          2.271s           3.578s           5.721s
Rust WASM             163ms            3.286s           5.207s           8.317s

Analysis

I hope nobody is surprised that Rust took 1st place, followed by WASM, while JavaScript finished last.

It's interesting, though, to see that on low numbers, such as the 30th Fibonacci number, all 3 methods performed roughly the same, with Rust being 2.23% faster and WASM being 1.33% faster. This proves again that you always need to benchmark a specific function / method before assuming that a switch to a low-level language will perform better.

However, once we go higher in the Fibonacci numbers, we can see a clear difference between the 3 methods.

We can see that for JavaScript, the jump between the 44th and 45th Fibonacci numbers resulted in a 60.07% increase in time, while the jump from the 45th to the 46th number resulted in a 60.69% increase.

For the Native Rust module, the increases were 57.55% and 59.89% respectively, while for WASM the numbers were 58.46% and 59.72% respectively.
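These near-identical step-ups are no coincidence: the naive recursive algorithm does work proportional to φⁿ, where φ ≈ 1.618 is the golden ratio, so each increment of n should multiply the running time by roughly φ, i.e. a ~61.8% increase, regardless of language. A quick sanity check against the table above (the runtimes are the measured means; everything else is plain arithmetic):

```rust
fn main() {
    // Mean running times in seconds for the 44th, 45th and 46th numbers,
    // taken from the benchmark table above.
    let runs = [
        ("JavaScript (NodeJS)", [5.846, 9.358, 15.038]),
        ("Native Rust",         [2.271, 3.578, 5.721]),
        ("Rust WASM",           [3.286, 5.207, 8.317]),
    ];

    // The golden ratio: the asymptotic growth factor of naive recursive Fibonacci.
    let phi = (1.0 + 5.0_f64.sqrt()) / 2.0;
    println!("phi = {phi:.4}");

    for (name, t) in runs {
        // Ratios of consecutive runtimes; both should hover around phi.
        println!("{name}: x{:.3}, x{:.3}", t[1] / t[0], t[2] / t[1]);
    }
}
```

All three ratios land between roughly 1.57 and 1.61, which is why the percentage jumps in the text all cluster around 60%.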

Conclusion 1: Rust was the most efficient at computing the next number in the chain, followed by WASM, with JS taking the bronze medal in this race.

Let’s continue. Native Rust was 61.15% faster than JS in computing the 44th number, while WASM was 43.79% faster. For 45th and 46th numbers, Rust was 61.76% and 61.95% faster respectively, while WASM was 44.35% and 44.69% faster, respectively.

Conclusion 2: Opting for a Native Rust module increased performance by over 60% on average! Opting for WASM increased performance by almost 45%.

WASM took 44.69% longer than Rust to compute the 44th number, 45.52% longer for the 45th number, and 45.37% longer for the 46th number.

Conclusion 3: WASM was, on average, about 45% slower than native Rust.

Summary

I hate benchmarks. Especially on useless functions like Fibonacci. You should always do a side-by-side comparison for your use case, rather than rely on benchmarks like this one. However, this benchmark can give us some baseline assumptions. We can see that JavaScript is fast enough on lower Fibonacci numbers; however, it struggles as we go higher.

It's no surprise that WebAssembly takes a strong second place. One of its goals was to provide near-native code execution speed. However, considering that WebAssembly is executed by a VM, it will be hard for it to achieve truly native performance, as Rust does.

What should you choose, and when?

I won't discuss when and why you should choose JavaScript. Chances are, if you are reading this article, you've already chosen JavaScript but are looking at how to improve performance in critical parts of your application. Therefore, I'll focus on the differences between Native modules in Rust and WebAssembly.

Native or WebAssembly?

First, you need to understand the major difference between the two. Native modules are modules written in a compiled language like Rust or C/C++ and loaded into NodeJS as compiled addons (via N-API, in neon's case). They are no different from .so or .dll files that are loaded by, say, Java's JNI or any other language with FFI support.

WebAssembly, on the other hand, is a compilation target. What this means is that the best way to write WASM is to compile another language, such as Rust or C/C++, into WebAssembly. Yes, you can write WASM directly, by writing WAT (WebAssembly Text) files and then translating them to WASM using wabt, but why would you do that? I won't argue that x86 Assembly is a useful language to know for certain applications (for example, extreme optimization cases for games or Digital Audio Workstations); WASM as a language, on the other hand, is of little use, because many higher-level languages can be compiled to WASM.

And since WASM is mainly a compilation target, I assume the difference in performance between a C++ or Rust implementation compiled to WASM will boil down to the actual performance of the code written in C++/Rust, and to how good the tools that compile said code to WASM are (Emscripten for C/C++, LLVM's WASM backend for Rust). Other than that, there should be no performance difference whether the WASM was compiled from C or from Rust.

Having said that, there is one big difference between the two. When I started doing the research for this article, I wanted to focus on a more realistic problem than Fibonacci. I downloaded a 14.7MB CSV file that contains ranges of IPv4 addresses mapped to different countries. My idea was to parse this CSV using the 3 methods, load the data into memory as an array, and then perform a lookup of 50 random IPs to find out which countries they belong to. This would simulate a scenario where you have a huge chunk of data loaded into memory and you need to scan it to find a specific value.
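The lookup part of that experiment is straightforward. Here is a minimal native-Rust sketch of the idea (the hard-coded ranges and the helper names are hypothetical, standing in for the parsed CSV; it assumes the ranges are sorted by their start address, as the real dataset would be):

```rust
// Pack a dotted-quad IPv4 address into a u32 so ranges compare numerically.
fn ip_to_u32(ip: &str) -> u32 {
    ip.split('.')
        .map(|octet| octet.parse::<u32>().unwrap())
        .fold(0, |acc, octet| (acc << 8) | octet)
}

// Each entry is (range start, range end, country code), sorted by start.
fn lookup(ranges: &[(u32, u32, &'static str)], ip: u32) -> Option<&'static str> {
    // Binary search for the last range starting at or before `ip`,
    // then check that `ip` actually falls inside it.
    let idx = ranges.partition_point(|&(start, _, _)| start <= ip);
    idx.checked_sub(1).and_then(|i| {
        let (start, end, country) = ranges[i];
        (ip >= start && ip <= end).then_some(country)
    })
}

fn main() {
    let ranges = [
        (ip_to_u32("1.0.0.0"), ip_to_u32("1.0.0.255"), "AU"),
        (ip_to_u32("1.0.1.0"), ip_to_u32("1.0.3.255"), "CN"),
    ];
    println!("{:?}", lookup(&ranges, ip_to_u32("1.0.2.17"))); // prints: Some("CN")
}
```

Parsing the CSV from disk is the part that behaves very differently between the three methods, as we are about to see.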

I successfully completed the task for JavaScript and for the Native Rust module, but when I finished writing it for WASM, upon opening the CSV file from disk, I got a panic. That was the first time I realized the biggest difference between a Native Rust module and WASM.

A short trip to the land called std - The standard library

You see, compiled languages can be compiled for different operating systems and target architectures. The two biggest architectures today are x86 / x86_64 (the standard 32-bit and 64-bit architectures we know, implemented by Intel and AMD) and ARM (armv7 and the 64-bit aarch64), thanks to smartphones, the Raspberry Pi and the recent Mac M1 processors. Each architecture handles things like floats and math differently, therefore gcc (the C compiler) and rustc (the Rust compiler) need to know how to produce machine code suited for each architecture. On top of all that, each operating system, such as Linux, MacOS and Windows, has its own way of managing things like file descriptors and network operations. When you write File::open in Rust, it actually calls the OS-defined method to handle file opening. In fact, the File struct in Rust is no more than simply

pub struct File {
    inner: fs_imp::File,
}

with fs_imp::File being the actual platform-specific filesystem implementation, which is handled by the OS. 1

std is very interesting, I even had some small experience implementing my own memset, around 10 years ago when I was toying with my own OS kernel. But what’s the connection to WASM?

Well, remember I said that WASM is a compilation target? So essentially, when you run wasm-pack, somewhere down the line it calls cargo build (which in turn calls rustc) and passes cargo a special argument, --target, equal to wasm32-unknown-unknown. It means that we are compiling 32-bit WASM code for an unknown vendor and an unknown system (other examples of targets include things like x86_64-pc-windows-msvc, meaning that we are compiling for an Intel/AMD 64-bit architecture, on PC, for Windows, using the MSVC ABI; you can list all supported targets with rustc --print target-list).

And since WASM is a compilation target with unknown vendor and system, there is no stdlib!

In fact, we can go to the unsupported system in the Rust source code and see that File::open actually calls the unsupported() 2 function, which simply returns an "unsupported" error (which our code then panics on when unwrapping).

And this leads me to the biggest difference between the two: if you need any of the stdlib utilities, such as accessing the filesystem, accessing the network, threads, or anything else related to the OS, choose native modules. WASM simply can't support this functionality, since it was designed to be executed by a VM running in a sandboxed environment.

Correction

Upon reading more, I realized that there are 2 compilation targets for WASM: wasm32-unknown-unknown, which we've already seen, and wasm32-wasi. WASI, or WebAssembly System Interface, is a still-in-development standard for getting safe access to some resources of the OS, such as fd_read and fd_write. Some WASM VMs provide support for WASI but, to my understanding, no browser currently supports WASI, as it's not fully standardized. Back to the article.

Below is a simple guideline for when to choose Native vs WebAssembly (I don't have vast experience with both methods, so I can't give you a definitive flowchart).

Comparison between Native Modules and WebAssembly

Performance

Because WASM is executed by a VM, native modules will, most likely, be more performant than their WASM counterparts.

Reusability

Native modules are also reusable: we can use them in any other language that supports FFI. So, for example, if you have shared logic in a Rust native module, you can load it from NodeJS, from Python (using CFFI) or from Ruby (using Ruby FFI). WebAssembly, on the other hand, can only be run by a WASM VM.
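As a sketch of what that reuse looks like: compiled with crate-type = ["cdylib"], the Rust below exposes our fibonacci over the plain C ABI, so any FFI-capable runtime can load the resulting .so / .dylib / .dll (the exact file name and loading code depend on the consuming language; this is an illustration, not the neon module from the previous article).

```rust
// Exposed with an unmangled symbol name and the C calling convention,
// so ctypes, CFFI, Ruby FFI, JNA, etc. can all find and call it.
#[no_mangle]
pub extern "C" fn fibonacci(n: i32) -> i32 {
    match n {
        n if n < 1 => 0,
        n if n <= 2 => 1,
        _ => fibonacci(n - 1) + fibonacci(n - 2),
    }
}
```

Note that only C-compatible types (integers, floats, raw pointers) can cross this boundary; anything richer needs manual marshalling, which is exactly the kind of conversion work that neon and wasm-bindgen automate for you.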

Ergonomics

In my opinion, the ergonomics of wasm_bindgen are way better than neon's. I like neon, don't get me wrong, but all you need to do in order to export a function from Rust to WASM is add #[wasm_bindgen] before the function. With neon you need to mess with conversions.

Portability

Native modules depend on the host machine. WASM, being run by a VM, is a very portable format: get a .wasm file and it's guaranteed to run in any environment that has a WASM VM. Native modules need to be recompiled for each host. It's not that big of a deal if you are using lean Docker containers such as alpine, but it is something you need to be aware of, and "I don't know, it works on my machine" becomes a real issue.

Entry barrier for new developers

Native modules can be written, mainly, in two languages: C/C++ or Rust. C/C++ gets a lot of hate, while Rust is the most loved language 3. But Rust is not simple. In C/C++ you fight core dumps; in Rust you fight the borrow checker. It's not the simplest language to grasp for someone who is not familiar with the concepts of memory management and pointers.

WASM, being a compilation target, can be produced from many languages. The main ones are C/C++ and Rust, but there is also partial support for Python, Java, Ruby and Go. And once WASM supports multi-threading and garbage collection, C# will be a candidate as well. Moreover, there is a special language with TypeScript-like syntax, called AssemblyScript, that was created with one purpose: to be compiled to WASM.

So the entry barrier for WASM is lower, in my opinion.

Node vs Browser

While both this post and my previous one focus mainly on NodeJS, we can't forget the browser. JavaScript is still, and probably forever will be, king in the browser. And native modules can't be executed in the browser, or in any other place that is not a NodeJS runtime. JavaScript does not (yet?) support FFI.

So if you are developing for the browser, or for any other system that has access to a JavaScript runtime and a WASM VM – your only choice is WebAssembly.

Conclusion

Native modules and WebAssembly are both performance optimizations for JavaScript, but they solve different problems. Native modules extend NodeJS with performant code while giving you full access to the stdlib. WebAssembly replaces non-performant JavaScript code with a near-native-performance binary that is executed in a sandboxed environment.

Footnotes

  1. Rust fs.rs

  2. Rust fs.rs

  3. Stackoverflow 2021 survey


Published by

Dmitry Kudryavtsev


Senior Software Engineer / Tech Entrepreneur

With more than 14 years of professional experience in tech, Dmitry is a generalist software engineer with a strong passion to writing code and writing about code.

