I’ve been writing JavaScript and TypeScript for years. It’s the language I think in, the ecosystem I know inside out. Node.js is what I reach for by default — fast to prototype, massive ecosystem, and good enough for most things.
But "good enough" has a ceiling. When I started building services that needed to handle tens of thousands of concurrent connections, process data with minimal latency, and run for months without memory creeping up — I started hitting that ceiling. That’s when I picked up Rust.
This isn’t a "Rust is better than Node" post. It’s a practical walkthrough of what the transition actually looks like — the mental shifts, the code patterns, and the real tradeoffs — from someone who still writes TypeScript daily.
## Why Rust?
The honest answer: performance and reliability.
Node.js is single-threaded by design. Yes, we have worker threads and the event loop handles I/O concurrently, but CPU-bound work blocks the main thread. For I/O-heavy web services, Node is excellent. For anything that involves heavy computation, data processing, or needs predictable low-latency responses — you feel the limits.
Rust gave me:
- No garbage collector — memory is freed deterministically, no GC pauses, no memory creep over long-running processes.
- True parallelism — real threads with zero-cost abstractions for concurrency.
- Compile-time guarantees — if it compiles, an entire class of bugs (null references, data races, use-after-free) simply cannot exist.
- Predictable performance — no JIT warmup, no deoptimization surprises. The performance you measure is the performance you get.
That said, Rust has a real learning curve. The first few weeks were humbling.
## The Ownership Mental Model
The single biggest shift coming from JavaScript to Rust is ownership. In JavaScript, you don’t think about who "owns" a value. Everything is garbage collected. You create objects, pass them around, and the runtime figures out when to clean them up.
In Rust, every value has exactly one owner. When the owner goes out of scope, the value is dropped. If you want to give a value to someone else, you move it — and the original variable becomes invalid.
```rust
fn main() {
    let name = String::from("hello");
    let greeting = name; // `name` is moved to `greeting`
    println!("{}", name); // ERROR: value used after move
}
```

In JavaScript, this would just work — both variables point to the same string, and the GC handles cleanup. In Rust, this is a compile-time error.
The first time you hit this, it feels like the compiler is fighting you. But there’s a reason: Rust is preventing you from having two variables that could both try to free the same memory, or from reading data that another part of your code is modifying.
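When you genuinely need two independent copies (the closest analogue to JavaScript's behavior), you opt in explicitly. A minimal sketch:

```rust
fn main() {
    let name = String::from("hello");
    // `.clone()` makes a deep copy: each variable now owns
    // its own heap allocation, so both stay valid.
    let greeting = name.clone();
    println!("{name} and {greeting}");
    assert_eq!(name, greeting);
}
```

The difference from JavaScript is that the copy is explicit and visible at the call site: you pay for it only when you ask for it.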
### Borrowing: The Solution
Instead of moving values everywhere, Rust lets you borrow them — either as a shared reference (`&T`, read-only, multiple allowed) or a mutable reference (`&mut T`, exclusive, only one at a time).
```rust
fn greet(name: &str) {
    println!("Hello, {name}!");
}

fn main() {
    let name = String::from("Zevs");
    greet(&name); // borrow `name` — we still own it
    println!("{name}"); // still valid, we only lent it out
}
```

The mental model that clicked for me: think of values like physical objects. You can hand someone a book to read (shared borrow), or hand it to them to write in (mutable borrow), but you can’t let two people write in it at the same time. And if you give the book away (move), you don’t have it anymore.
Coming from TypeScript where everything is a reference and mutation is unrestricted, this feels limiting at first. After a while, it starts feeling like a superpower — the compiler enforces discipline that you’d otherwise need extensive testing and code review to maintain.
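The "many readers or one writer" rule is visible directly in code. A small std-only sketch (this compiles only because the shared borrows are no longer used once the mutable borrow starts):

```rust
fn main() {
    let mut book = String::from("chapter 1");

    // Many readers at once is fine...
    let r1 = &book;
    let r2 = &book;
    println!("readers see: {r1} / {r2}");

    // ...but writing requires exclusivity. The compiler accepts this
    // because `r1` and `r2` are never used again after this point.
    let writer = &mut book;
    writer.push_str(", chapter 2");

    assert_eq!(book, "chapter 1, chapter 2");
}
```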
## Comparing HTTP Servers
Let’s look at something concrete: building an HTTP server. This is where most Node developers start with Rust.
### Express / Hono (TypeScript)
```ts
import { Hono } from 'hono'

interface User {
  id: number
  name: string
  email: string
}

const users: User[] = [
  { id: 1, name: 'Zevs', email: 'zevs@example.com' },
]

const app = new Hono()

app.get('/users', (c) => {
  return c.json(users)
})

app.get('/users/:id', (c) => {
  const id = Number(c.req.param('id'))
  const user = users.find(u => u.id === id)
  if (!user) {
    return c.json({ error: 'User not found' }, 404)
  }
  return c.json(user)
})

app.post('/users', async (c) => {
  const body = await c.req.json<Omit<User, 'id'>>()
  const user: User = {
    id: users.length + 1,
    ...body,
  }
  users.push(user)
  return c.json(user, 201)
})

export default app
```

Straightforward. Parse params, find data, return JSON. The TypeScript types give us editor support but no runtime guarantees — `c.req.json<Omit<User, 'id'>>()` doesn’t actually validate the body at runtime.
### Axum (Rust)
```rust
use axum::{
    extract::{Json, Path, State},
    http::StatusCode,
    response::IntoResponse,
    routing::get,
    Router,
};
use serde::{Deserialize, Serialize};
use std::sync::{Arc, Mutex};

#[derive(Clone, Serialize, Deserialize)]
struct User {
    id: u32,
    name: String,
    email: String,
}

#[derive(Deserialize)]
struct CreateUser {
    name: String,
    email: String,
}

type AppState = Arc<Mutex<Vec<User>>>;

async fn list_users(State(users): State<AppState>) -> Json<Vec<User>> {
    let users = users.lock().unwrap();
    Json(users.clone())
}

async fn get_user(
    State(users): State<AppState>,
    Path(id): Path<u32>,
) -> impl IntoResponse {
    let users = users.lock().unwrap();
    match users.iter().find(|u| u.id == id) {
        Some(user) => Ok(Json(user.clone())),
        None => Err(StatusCode::NOT_FOUND),
    }
}

async fn create_user(
    State(users): State<AppState>,
    Json(input): Json<CreateUser>,
) -> impl IntoResponse {
    let mut users = users.lock().unwrap();
    let user = User {
        id: users.len() as u32 + 1,
        name: input.name,
        email: input.email,
    };
    users.push(user.clone());
    (StatusCode::CREATED, Json(user))
}

#[tokio::main]
async fn main() {
    let state: AppState = Arc::new(Mutex::new(vec![User {
        id: 1,
        name: "Zevs".into(),
        email: "zevs@example.com".into(),
    }]));

    let app = Router::new()
        .route("/users", get(list_users).post(create_user))
        .route("/users/{id}", get(get_user))
        .with_state(state);

    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000")
        .await
        .unwrap();
    axum::serve(listener, app).await.unwrap();
}
```

More verbose, yes. But look at what you get for free:
- Thread-safe state — `Arc<Mutex<Vec<User>>>` is enforced by the compiler. You literally cannot share mutable state between handlers without proving it’s safe.
- Deserialization with validation — `Json<CreateUser>` automatically deserializes and validates the request body. If a field is missing, Axum returns a 422 before your handler even runs.
- Type-safe path extraction — `Path(id): Path<u32>` extracts and parses the path parameter. If someone sends `/users/abc`, it returns a 400 automatically.
- Exhaustive pattern matching — the `match` in `get_user` forces you to handle both the found and not-found cases. You can’t forget.
The Rust version is longer, but it handles edge cases that the TypeScript version silently ignores.
## Error Handling: try/catch vs Result
In JavaScript, errors are thrown and (hopefully) caught:
```ts
async function fetchUser(id: number): Promise<User> {
  const res = await fetch(`/api/users/${id}`)
  if (!res.ok) {
    throw new Error(`HTTP ${res.status}`)
  }
  const data = await res.json()
  return data as User
}

// Caller must remember to try/catch
try {
  const user = await fetchUser(1)
  console.log(user.name)
}
catch (err) {
  console.error('Failed:', err)
}
```

The problem: nothing forces the caller to handle the error. Forget the try/catch, and the error propagates silently until it crashes your process or gets swallowed by a generic handler.
In Rust, errors are values, not exceptions:
```rust
use serde::Deserialize;

#[derive(Deserialize)]
struct User {
    name: String,
}

async fn fetch_user(id: u32) -> Result<User, reqwest::Error> {
    let user = reqwest::get(format!("http://api/users/{id}"))
        .await?
        .error_for_status()?
        .json::<User>()
        .await?;
    Ok(user)
}

// Caller MUST handle the Result — the compiler enforces it
async fn main_logic() {
    match fetch_user(1).await {
        Ok(user) => println!("{}", user.name),
        Err(e) => eprintln!("Failed: {e}"),
    }
}
```

The `?` operator is Rust’s equivalent of `await` for errors — it propagates the error to the caller if it fails, or unwraps the success value if it succeeds. But unlike exceptions, the return type `Result<User, reqwest::Error>` makes it explicit that this function can fail. The compiler won’t let you ignore it.
This was one of the things that immediately made me a better programmer. In TypeScript, I had to rely on discipline and code review to ensure errors were handled. In Rust, the type system does it for me.
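You don’t need an HTTP client to try this: `?` works with any `Result`, including std’s. A self-contained sketch using `ParseIntError` (the `sum_of` helper is mine, invented for illustration):

```rust
use std::num::ParseIntError;

// `?` either unwraps the Ok value or returns the Err to the caller.
fn sum_of(a: &str, b: &str) -> Result<i64, ParseIntError> {
    let a: i64 = a.parse()?; // early-returns the Err on bad input
    let b: i64 = b.parse()?;
    Ok(a + b)
}

fn main() {
    assert_eq!(sum_of("2", "40"), Ok(42));
    assert!(sum_of("2", "forty").is_err()); // no try/catch needed
    println!("ok");
}
```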
### Custom Error Types
In real applications, you usually define your own error types. With the thiserror crate, this is clean:
```rust
use thiserror::Error;

#[derive(Error, Debug)]
enum AppError {
    #[error("user {0} not found")]
    NotFound(u32),
    #[error("database error: {0}")]
    Database(#[from] sqlx::Error),
    #[error("request failed: {0}")]
    Request(#[from] reqwest::Error),
    #[error("unauthorized")]
    Unauthorized,
}
```

Every error variant is typed, carries context, and converts automatically from library errors via `#[from]`. Pattern matching on these errors gives you fine-grained control:
```rust
match do_something().await {
    Ok(result) => handle_success(result),
    Err(AppError::NotFound(id)) => return not_found(id),
    Err(AppError::Unauthorized) => return redirect_to_login(),
    Err(e) => {
        tracing::error!("unexpected error: {e}");
        return internal_error();
    }
}
```

Compare this to JavaScript, where `catch (err)` gives you an `unknown` type and you’re left guessing what went wrong with `instanceof` checks.
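For the curious: `thiserror` only generates impls you could write by hand. Here is a sketch of roughly what the derive expands to, simplified and using a std-only source error so it runs without any crates (the `AppError` variants and `lookup` helper are illustrative):

```rust
use std::fmt;
use std::num::ParseIntError;

#[derive(Debug)]
enum AppError {
    NotFound(u32),
    Parse(ParseIntError),
}

// `#[error("...")]` becomes a Display impl like this.
impl fmt::Display for AppError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            AppError::NotFound(id) => write!(f, "user {id} not found"),
            AppError::Parse(e) => write!(f, "parse error: {e}"),
        }
    }
}

impl std::error::Error for AppError {}

// `#[from]` becomes a From impl, which is what lets `?` convert
// a ParseIntError into an AppError automatically.
impl From<ParseIntError> for AppError {
    fn from(e: ParseIntError) -> Self {
        AppError::Parse(e)
    }
}

fn lookup(raw_id: &str) -> Result<u32, AppError> {
    let id: u32 = raw_id.parse()?; // ParseIntError -> AppError via From
    if id == 0 {
        return Err(AppError::NotFound(id));
    }
    Ok(id)
}

fn main() {
    assert_eq!(lookup("7").unwrap(), 7);
    assert!(lookup("abc").unwrap_err().to_string().starts_with("parse error"));
    assert_eq!(lookup("0").unwrap_err().to_string(), "user 0 not found");
}
```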
## Async: Promises vs Futures
Both Node and Rust use async/await, but the underlying mechanics are fundamentally different.
In JavaScript, Promises are eager — they start executing immediately when created:
```ts
const promise = fetchUser(1) // Already running!
// ... do other stuff ...
const user = await promise // Wait for it to finish
```

In Rust, Futures are lazy — they do nothing until polled:
```rust
let future = fetch_user(1); // Nothing happens yet
// ... do other stuff ...
let user = future.await; // NOW it starts executing
```

This laziness is actually an advantage. It means you can compose futures without accidentally triggering side effects. You build up a computation graph, and nothing runs until you explicitly drive it.
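You can observe the laziness without any async runtime by writing a toy executor. This is a sketch, not production code: real runtimes park the thread instead of spinning, and the no-op waker below is the standard std-only trick for driving a future by hand:

```rust
use std::cell::Cell;
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A toy executor: poll the future in a loop with a no-op waker.
fn block_on<F: Future>(fut: F) -> F::Output {
    fn raw() -> RawWaker {
        unsafe fn no_op(_: *const ()) {}
        unsafe fn clone(_: *const ()) -> RawWaker { raw() }
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, no_op, no_op, no_op);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    let waker = unsafe { Waker::from_raw(raw()) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(fut);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    let started = Cell::new(false);
    let fut = async {
        started.set(true);
        21 * 2
    };
    // The async block has NOT run yet: futures are lazy.
    assert!(!started.get());
    let answer = block_on(fut);
    // Only polling (driving) the future executed its body.
    assert!(started.get());
    assert_eq!(answer, 42);
    println!("answer = {answer}");
}
```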
### Concurrency Patterns
Running tasks concurrently in JavaScript:
```ts
// Run in parallel
const [users, posts] = await Promise.all([
  fetchUsers(),
  fetchPosts(),
])

// Race — first one wins
const result = await Promise.race([
  fetchFromPrimary(),
  fetchFromFallback(),
])
```

In Rust with Tokio:
```rust
// Run in parallel
let (users, posts) = tokio::join!(
    fetch_users(),
    fetch_posts(),
);

// Race — first one wins
tokio::select! {
    result = fetch_from_primary() => handle(result),
    result = fetch_from_fallback() => handle(result),
}
```

Similar syntax, but Rust’s `tokio::select!` is more powerful — it cancels (drops) the losing future automatically, while JavaScript’s `Promise.race` leaves the other promise running in the background.
Where Rust really shines is CPU-bound concurrency. In Node, you’d need worker threads:
```js
// Node.js — worker threads for CPU work (cumbersome)
const { Worker } = require('node:worker_threads')

const worker = new Worker('./heavy-computation.js', {
  workerData: input,
})
worker.on('message', (result) => { /* ... */ })
```

In Rust, you just spawn a task on a thread pool:
```rust
// Rust — spawn blocking work on a thread pool
let result = tokio::task::spawn_blocking(move || {
    heavy_computation(input)
}).await?;
```

Or use `rayon` for data parallelism that automatically scales across all CPU cores:
```rust
use rayon::prelude::*;

// Process millions of items across all cores
let results: Vec<Output> = inputs
    .par_iter()
    .map(|item| process(item))
    .collect();
```

There’s no JavaScript equivalent that’s this simple and this fast.
## The Type System
TypeScript’s type system is structural and exists only at compile time — it’s erased at runtime. Rust’s type system is nominal and exists at every level, from compile time to the actual memory layout.
### Enums: Rust’s Secret Weapon
TypeScript has union types. Rust has algebraic data types (enums with data). This is, hands down, the feature I miss most when I go back to TypeScript.
```ts
// TypeScript: discriminated union
type ApiResponse =
  | { status: 'success', data: User }
  | { status: 'error', message: string }
  | { status: 'loading' }

function handle(response: ApiResponse) {
  switch (response.status) {
    case 'success':
      console.log(response.data.name)
      break
    case 'error':
      console.error(response.message)
      break
    case 'loading':
      console.log('Loading...')
      break
  }
}
```

```rust
// Rust: enum with data
enum ApiResponse {
    Success { data: User },
    Error { message: String },
    Loading,
}

fn handle(response: ApiResponse) {
    match response {
        ApiResponse::Success { data } => println!("{}", data.name),
        ApiResponse::Error { message } => eprintln!("{message}"),
        ApiResponse::Loading => println!("Loading..."),
    }
}
```

They look similar. But here’s the difference: if you add a new variant to the Rust enum, the compiler immediately flags every `match` that doesn’t handle it. In TypeScript, a `switch` on a discriminated union won’t warn you about missing cases by default (you need a lint rule like `@typescript-eslint/switch-exhaustiveness-check` or an `assertNever` helper, and even then it’s opt-in).
Rust enums are also how `Option<T>` and `Result<T, E>` work — they’re not special language features, they’re just enums:

```rust
enum Option<T> {
    Some(T),
    None,
}

enum Result<T, E> {
    Ok(T),
    Err(E),
}
```

This means null and errors are handled through the same pattern matching system as everything else. No special syntax, no special rules. It’s elegant.
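Because they are ordinary enums, `Option` and `Result` also come with a rich combinator API (`map`, `copied`, `unwrap_or`, and friends), so you rarely need an explicit `match`. A quick std-only sketch:

```rust
fn main() {
    let numbers = vec![1, 2, 3];

    // `find` returns Option<&i32>; `map` transforms only the Some case.
    let doubled_even = numbers.iter()
        .find(|n| *n % 2 == 0)
        .map(|n| n * 2);
    assert_eq!(doubled_even, Some(4));

    // `unwrap_or` supplies a default for the None case: no null checks.
    let big = numbers.iter().find(|n| **n > 10).copied().unwrap_or(0);
    assert_eq!(big, 0);

    // The same `map` shape works on Result: transform Ok, pass Err through.
    let parsed = "21".parse::<i32>().map(|n| n * 2);
    assert_eq!(parsed, Ok(42));
}
```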
### Traits vs Interfaces
TypeScript interfaces define a shape. Rust traits define behavior.
```ts
// TypeScript: interface
interface Serializable {
  serialize: () => string
}

class User implements Serializable {
  serialize(): string {
    return JSON.stringify(this)
  }
}
```

```rust
// Rust: trait
trait Serializable {
    fn serialize(&self) -> String;
}

impl Serializable for User {
    fn serialize(&self) -> String {
        serde_json::to_string(self).unwrap()
    }
}
```

The key difference: in Rust, you can implement traits for types you don’t own. I can implement `Serializable` for `String` or `Vec<u8>` or any third-party type. In TypeScript, you can’t retroactively make a class implement an interface without modifying it.
This extensibility is what makes Rust’s ecosystem so composable. The `serde` crate (Rust’s de facto serialization framework) works by implementing `Serialize` and `Deserialize` traits — and you can add support for any data format or any type without modifying either.
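Here is that idea in miniature, std-only: a trait of our own implemented for types we don’t own (the `Describe` trait is hypothetical, invented for illustration — the same pattern serde uses at scale):

```rust
// A local trait: behavior, not just shape.
trait Describe {
    fn describe(&self) -> String;
}

// Implemented for std types we don't own. This is allowed because
// the trait itself is local (Rust's "orphan rule").
impl Describe for u32 {
    fn describe(&self) -> String {
        format!("the number {self}")
    }
}

impl Describe for Vec<u8> {
    fn describe(&self) -> String {
        format!("{} bytes", self.len())
    }
}

fn main() {
    assert_eq!(7u32.describe(), "the number 7");
    assert_eq!(vec![1u8, 2, 3].describe(), "3 bytes");
    println!("{}", 7u32.describe());
}
```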
## The Ecosystem: npm vs Cargo
| | npm | Cargo |
|---|---|---|
| Package registry | npmjs.com | crates.io |
| Lock file | package-lock.json / pnpm-lock.yaml | Cargo.lock |
| Monorepo support | workspaces | workspaces |
| Build scripts | package.json scripts | build.rs |
| Testing | External (vitest, jest) | Built-in (cargo test) |
| Formatting | External (prettier) | Built-in (cargo fmt) |
| Linting | External (eslint) | Built-in (cargo clippy) |
| Docs | External (typedoc) | Built-in (cargo doc) |
Cargo is batteries-included in a way that npm isn’t. Testing, formatting, linting, documentation generation — all built into the toolchain. No arguing about which test runner to use, no configuring formatters, no installing extra dev dependencies for basic workflows.
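To make "built-in testing" concrete: a unit test lives in the same file as the code, behind `#[cfg(test)]`, and `cargo test` discovers and runs it with zero configuration. A minimal sketch (the `slugify` function is made up for the example):

```rust
// src/lib.rs: no test runner to install, no config file to write.
pub fn slugify(title: &str) -> String {
    title.trim().to_lowercase().replace(' ', "-")
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn slugifies_titles() {
        assert_eq!(slugify("  Hello World "), "hello-world");
    }
}
```

Running `cargo test` compiles the tests and executes them in parallel, and `cargo fmt`, `cargo clippy`, and `cargo doc` work on the same code with the same zero setup.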
The crates I use most frequently:
- `tokio` — async runtime, the foundation everything else builds on
- `axum` — HTTP framework by the Tokio team
- `serde` + `serde_json` — serialization/deserialization
- `sqlx` — async database driver with compile-time SQL checking
- `reqwest` — HTTP client
- `tracing` — structured logging and diagnostics
- `thiserror` / `anyhow` — error handling
- `clap` — CLI argument parsing
The ecosystem is smaller than npm, but the quality bar is noticeably higher. Fewer packages to choose from, but less decision fatigue and fewer abandoned dependencies.
## Performance: Real Numbers
I ran a simple benchmark on an identical API (JSON serialization, database query, response) on the same hardware:
| | Hono (Bun) | Axum (Rust) |
|---|---|---|
| Requests/sec | ~48,000 | ~210,000 |
| p99 latency | 4.2ms | 0.3ms |
| Memory usage | ~85MB | ~8MB |
| Cold start | ~120ms | ~3ms |
Rust isn’t just faster — it’s a different class of performance. The 10x lower memory usage means I can run more services on the same hardware. The sub-millisecond p99 latency means tail latency doesn’t exist. The instant cold start means serverless deployments are truly instant.
But these numbers only matter for specific workloads. For a blog, a CRUD app, or a prototype — the difference is irrelevant. Node is fast enough, and you ship weeks sooner.
## When I Use Which
After working with both for a while, I’ve developed a simple decision framework:
I use Node.js / TypeScript when:
- Rapid prototyping and MVPs — nothing beats the iteration speed.
- Frontend + full-stack web apps — Nuxt, Astro, and the ecosystem are unmatched.
- Scripting and tooling — quick scripts, CLIs, build tools.
- The team is JavaScript-heavy — hiring and onboarding matter.
I use Rust when:
- High-throughput backend services — APIs handling thousands of RPS.
- Long-running processes — daemons, workers, queue consumers where memory stability matters.
- Data processing pipelines — parsing, transforming, aggregating large datasets.
- Systems-level work — anything touching the network stack, file systems, or needing fine-grained control.
- Performance-critical paths — the hot loop that everything else depends on.
In practice, my projects are often both. A TypeScript frontend with Nuxt, calling a Rust API service via Axum. Or a Node.js orchestration layer that delegates heavy lifting to Rust microservices. They complement each other well.
## What I Wish I Knew Earlier
A few things that would have saved me weeks of frustration:
Don’t fight the borrow checker. When the compiler rejects your code, it’s usually pointing at a design issue, not a syntax issue. Instead of adding `.clone()` everywhere to make it compile, step back and think about who should own the data. The borrow checker is teaching you better architecture.
Start with `String`, not `&str`. Coming from JavaScript, you’ll want to use owned types (`String`, `Vec<T>`, `HashMap`) everywhere at first. That’s fine. You can optimize to borrowed types (`&str`, `&[T]`) later when you understand lifetimes better. Premature optimization of lifetimes is the root of all borrow checker frustration.
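In practice that advice looks like two versions of the same function: the owned one to get started, the borrowed one once lifetimes feel comfortable (both functions are illustrative):

```rust
// Owned parameter: callers must hand over (or clone) their String.
fn shout_owned(s: String) -> String {
    s.to_uppercase()
}

// Borrowed parameter: nothing to give up, accepts &String and literals alike.
fn shout_borrowed(s: &str) -> String {
    s.to_uppercase()
}

fn main() {
    let name = String::from("zevs");
    assert_eq!(shout_owned(name.clone()), "ZEVS"); // clone keeps `name` usable
    assert_eq!(shout_borrowed(&name), "ZEVS");     // just lend it out
    assert_eq!(shout_borrowed("literal"), "LITERAL");
}
```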
Use `anyhow` for applications, `thiserror` for libraries. `anyhow::Result` gives you ergonomic error handling without defining error types upfront — perfect for application code. `thiserror` gives you structured, typed errors — perfect for library code where callers need to match on error variants.
Read the clippy lints. `cargo clippy` isn’t just a linter — it’s a Rust teacher. Every suggestion comes with an explanation of why the alternative is better. Running clippy on my early Rust code was humbling but incredibly educational.
The Rust community is genuinely helpful. The Rust Users forum, the subreddit, and Discord are some of the most welcoming programming communities I’ve encountered. Don’t hesitate to ask questions — everyone remembers fighting the borrow checker for the first time.
## Conclusion
Learning Rust made me a better TypeScript developer. The concepts of ownership, explicit error handling, and thinking about data lifetimes — these are transferable ideas that improved how I write code in any language. I now think more carefully about who owns data, where mutations happen, and what errors a function can produce, even when writing JavaScript.
Rust isn’t a replacement for Node.js in my workflow. It’s an addition. A powerful tool for the problems where Node struggles — and there are more of those problems than I initially thought.
If you’re a Node.js developer curious about Rust, my advice: start with a small project. Rewrite a CLI tool you already have, or build a simple API with Axum. The first week will be painful. By the third week, you’ll start to feel the compiler working with you instead of against you. And by the first month, you’ll wonder how you ever wrote concurrent code without the borrow checker watching your back.
You can find my full tech stack here.
Thanks for reading!