I love Rust. I genuinely do. Its ownership system, borrow checker, and type system wipe out entire classes of vulnerabilities at compile time: use-after-free, double-free, data races, null pointer dereferences, buffer overflows. But here’s the thing: Rust gives you an escape hatch called unsafe, and when it’s used incorrectly, it reintroduces every single vulnerability that Rust was designed to prevent. The more I dug into real-world Rust codebases, the more I found this happening. Beyond unsafe, Rust has its own quirky set of security pitfalls: integer overflow behaviour that differs between debug and release builds, FFI boundaries that trust C code unconditionally, and logic errors that the type system simply cannot catch. In this post, I want to walk through the Rust-specific anti-patterns that break the safety promise.

unsafe: The Five Superpowers and Their Risks

The unsafe keyword enables five operations that the compiler cannot verify:

  1. Dereferencing raw pointers (*const T, *mut T)
  2. Calling unsafe functions or methods
  3. Accessing or modifying mutable static variables
  4. Implementing unsafe traits
  5. Accessing fields of union types

Each of these can introduce memory safety violations if the programmer’s invariants are wrong. And from what I’ve seen in code reviews, invariants are wrong more often than anyone wants to admit.
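As a quick illustration of the third superpower, here is a minimal sketch (names are mine) of why touching a mutable static requires unsafe: the compiler cannot prove exclusive access, so two threads calling this at once would be a data race.

```rust
// Hypothetical global counter; `static mut` access always requires unsafe.
static mut COUNTER: u64 = 0;

fn increment() {
    // The compiler cannot verify exclusive access here. If two threads
    // called this concurrently, it would be a data race: undefined behaviour.
    unsafe { COUNTER += 1 };
}

fn main() {
    increment();
    let value = unsafe { COUNTER }; // copy the value out, then print
    println!("Counter: {}", value);
}
```

The safe alternatives are `AtomicU64` or a `Mutex<u64>`, which encode the synchronisation invariant in the type instead of in the programmer’s head.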

The Easy-to-Spot Version: Raw Pointer Dereference

fn get_element(data: &[u8], index: usize) -> u8 {
    unsafe {
        *data.as_ptr().add(index) // No bounds check
    }
}

fn main() {
    let data = vec![1, 2, 3, 4, 5];
    let value = get_element(&data, 100); // Out-of-bounds read
    println!("Value: {}", value);
}

as_ptr().add(index) performs pointer arithmetic without bounds checking. If index >= data.len(), this reads arbitrary memory. The safe equivalent is data[index], which panics on out-of-bounds access, or data.get(index), which returns Option<&u8>. What I find interesting is how often developers reach for raw pointer arithmetic “for performance” when the optimiser would have eliminated the bounds check anyway. It’s worth benchmarking before reaching for unsafe.
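For comparison, here is a sketch of the same function written safely with get, which makes the out-of-bounds case explicit in the return type:

```rust
// Safe equivalent: bounds-checked access that cannot read arbitrary memory.
fn get_element(data: &[u8], index: usize) -> Option<u8> {
    data.get(index).copied() // None on out-of-bounds instead of UB
}

fn main() {
    let data = vec![1, 2, 3, 4, 5];
    assert_eq!(get_element(&data, 2), Some(3));
    assert_eq!(get_element(&data, 100), None); // handled, not exploited
}
```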

The Hard-to-Spot Version: Incorrect Lifetime in unsafe

This is the kind of bug that fascinates me, because it looks perfectly reasonable at first glance.

use std::slice;

struct Buffer {
    data: Vec<u8>,
}

impl Buffer {
    fn as_slice(&self) -> &[u8] {
        unsafe {
            slice::from_raw_parts(self.data.as_ptr(), self.data.len())
        }
    }

    fn extend(&mut self, extra: &[u8]) {
        self.data.extend_from_slice(extra);
    }
}

fn main() {
    let mut buf = Buffer { data: vec![1, 2, 3] };
    let slice_ref = buf.as_slice(); // Borrows buf immutably
    buf.extend(&[4, 5, 6]);        // Mutates buf, Vec may reallocate
    println!("{:?}", slice_ref);    // Would read freed memory... if this compiled
}

Wait, this code actually does not compile. The borrow checker catches the simultaneous immutable borrow (slice_ref) and mutable borrow (extend). Good. But consider a version where the unsafe code returns a raw pointer that outlives the borrow:

impl Buffer {
    fn as_raw_ptr(&self) -> *const u8 {
        self.data.as_ptr()
    }
}

fn main() {
    let mut buf = Buffer { data: vec![1, 2, 3] };
    let ptr = buf.as_raw_ptr(); // Raw pointer, no borrow tracking
    buf.extend(&[4, 5, 6]);    // Vec may reallocate, leaving ptr dangling
    unsafe {
        let val = *ptr; // Use-after-free
        println!("Value: {}", val);
    }
}

Raw pointers are not tracked by the borrow checker. The pointer ptr becomes dangling when extend causes the Vec to reallocate, but the compiler doesn’t flag this. The unsafe block dereferences a dangling pointer: a classic use-after-free. What clicked for me when studying this pattern is how natural it feels when you’re writing it. You think, “I just need a quick pointer,” and suddenly you’ve reintroduced the exact bug class Rust was built to prevent.
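One way out is to never let a raw pointer outlive a potential reallocation: hold an index instead, and re-borrow after every mutation. A sketch of that discipline applied to the same Buffer:

```rust
struct Buffer {
    data: Vec<u8>,
}

impl Buffer {
    fn extend(&mut self, extra: &[u8]) {
        self.data.extend_from_slice(extra);
    }
}

fn main() {
    let mut buf = Buffer { data: vec![1, 2, 3] };
    let index = 0; // remember the position, not the pointer
    buf.extend(&[4, 5, 6]);
    let val = buf.data[index]; // re-borrow after mutation: always valid
    println!("Value: {}", val);
}
```

An index stays meaningful across reallocations because it is resolved against the Vec’s current backing storage at each use; a raw pointer is frozen to the old allocation.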

Comparison: C’s Equivalent

In C, every pointer is a raw pointer. The entire program is effectively unsafe:

#include <stdlib.h>
#include <string.h>
#include <stdio.h>

int main() {
    char *buf = malloc(3);
    memcpy(buf, "abc", 3);
    char *ptr = buf;
    buf = realloc(buf, 100); // May move the allocation
    printf("%c\n", *ptr);    // Use-after-free if realloc moved the block
    free(buf);
    return 0;
}

The difference is that in Rust, unsafe blocks are auditable: you can grep for unsafe and review every instance. In C, every line of code is potentially unsafe. That’s a massive advantage for Rust, and it’s one of the reasons it’s such a compelling choice for new systems-level projects.

Integer Overflow: Debug vs Release Behaviour

This one surprised me when I first learned about it. Rust panics on integer overflow in debug mode but wraps silently in release mode. This creates a gap between testing and production behaviour that can be genuinely dangerous.

The Vulnerable Pattern

fn calculate_allocation_size(count: u32, item_size: u32) -> usize {
    let total = count * item_size; // Panics in debug, wraps in release
    total as usize
}

fn main() {
    let count: u32 = 70_000;
    let item_size: u32 = 70_000;
    let size = calculate_allocation_size(count, item_size);
    println!("Allocating {} bytes", size);
    let buffer = vec![0u8; size]; // Undersized allocation in release mode
}

70_000 * 70_000 = 4_900_000_000, which overflows u32 (max 4,294,967,295). In debug mode, this panics. In release mode, it wraps to 605,032,704. The allocation is 4.3 GB short of what was intended. Reading through CVE reports, I found this exact class of bug in file processing libraries where everything worked perfectly in tests but silently corrupted data in production. The debug/release split is, to me, one of Rust’s most surprising design decisions.
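You can reproduce the wrapped value deterministically with wrapping_mul, which wraps in every build profile (my sketch):

```rust
fn main() {
    let count: u32 = 70_000;
    let item_size: u32 = 70_000;
    // 4_900_000_000 mod 2^32 = 605_032_704: the silent release-mode result
    let wrapped = count.wrapping_mul(item_size);
    assert_eq!(wrapped, 605_032_704);
    println!("Wrapped result: {}", wrapped);
}
```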

The Fix: checked_mul

fn calculate_allocation_size(count: u32, item_size: u32) -> Result<usize, String> {
    let total = count.checked_mul(item_size)
        .ok_or("Integer overflow in size calculation")?;
    Ok(total as usize)
}

checked_mul returns None on overflow regardless of build profile. Using checked_add, checked_sub, checked_mul, and checked_div for all arithmetic on untrusted input is the way to go. It’s a little more verbose, but it’s explicit about what happens on overflow, and that explicitness is worth it.
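For reference, the whole family of explicit-overflow methods: checked_* returns Option, saturating_* clamps, wrapping_* wraps deliberately, and overflowing_* returns the wrapped value plus a flag. A quick sketch:

```rust
fn main() {
    let a: u32 = u32::MAX;
    assert_eq!(a.checked_add(1), None);          // overflow reported as None
    assert_eq!(a.saturating_add(1), u32::MAX);   // clamped at the maximum
    assert_eq!(a.wrapping_add(1), 0);            // explicit, intentional wraparound
    assert_eq!(a.overflowing_add(1), (0, true)); // wrapped value plus overflow flag
}
```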

Comparison: Go’s Approach

Go always wraps on integer overflow; there is no debug/release distinction:

// Go: Always wraps, no panic
var count uint32 = 70000
var itemSize uint32 = 70000
total := count * itemSize // Silently wraps to 605032704

Go’s consistency is arguably better than Rust’s split behaviour, but both require explicit overflow checks on security-critical arithmetic. Neither language saves you from thinking about this.

FFI Boundaries: Trusting C Code

Rust’s FFI (Foreign Function Interface) allows calling C functions. Every FFI call is unsafe because the Rust compiler cannot verify the C code’s behaviour. The boundary between safe Rust and C code is where memory safety guarantees break down, and it’s worth understanding exactly how.

The Vulnerable Pattern

use std::ffi::CString;
use std::os::raw::c_char;

extern "C" {
    fn process_input(data: *const c_char, len: usize) -> i32;
}

fn handle_request(input: &str) -> Result<i32, String> {
    let c_input = CString::new(input)
        .map_err(|_| "Input contains null byte")?;
    let result = unsafe {
        process_input(c_input.as_ptr(), input.len())
    };
    Ok(result)
}

The Rust side correctly creates a CString and passes a valid pointer. But the C function process_input might:

  • Write beyond the buffer (buffer overflow)
  • Store the pointer and use it after c_input is dropped (use-after-free)
  • Free the pointer (double-free when Rust drops c_input)
  • Return an error code that the Rust side does not check

The Rust compiler cannot verify any of these. The unsafe block trusts the C code completely. When I started looking into Rust projects with FFI layers, I found cases where the FFI was essentially a thin wrapper around C code with zero validation: all the safety benefits of Rust, thrown away at the boundary.

The Safer Pattern

const MAX_INPUT_SIZE: usize = 64 * 1024; // illustrative limit; pick one that fits your protocol

fn handle_request(input: &str) -> Result<i32, String> {
    if input.len() > MAX_INPUT_SIZE {
        return Err("Input too large".into());
    }
    let c_input = CString::new(input)
        .map_err(|_| "Input contains null byte")?;

    let result = unsafe {
        // SAFETY: process_input reads at most `len` bytes from `data`
        // and does not store the pointer beyond this call.
        // Verified by reviewing process_input source (commit abc123).
        process_input(c_input.as_ptr(), c_input.as_bytes().len())
    };

    if result < 0 {
        return Err(format!("C function failed with code {}", result));
    }
    Ok(result)
}

The // SAFETY: comment documents the invariants that the unsafe block relies on. This is a Rust convention that makes unsafe blocks auditable, and it’s worth treating as mandatory, not optional. If you can’t write the safety comment, you probably don’t understand the invariants well enough to use unsafe.

Comparison: Java’s JNI

Java’s JNI (Java Native Interface) has similar trust boundary issues:

public class NativeProcessor {
    static {
        System.loadLibrary("processor");
    }

    // The native method can corrupt the JVM heap, crash the process,
    // or violate Java's type safety; the JVM cannot prevent any of it.
    public native int processInput(byte[] data);
}

Both Rust FFI and Java JNI create a trust boundary where the managed language’s safety guarantees end. The difference is that Rust’s unsafe keyword makes the boundary explicit and greppable. That’s a small thing, but it matters a lot during code review.

Logic Errors the Type System Cannot Catch

One thing that’s easy to overlook is that Rust’s type system prevents memory safety bugs, not all bugs. Logic errors in security-critical code are equally possible in Rust, and those are often the ones that matter most.

Incorrect Permission Check

use std::collections::HashMap;

#[derive(Clone, Debug)]
struct User {
    name: String,
    role: String,
}

fn is_authorized(user: &User, resource: &str, permissions: &HashMap<String, Vec<String>>) -> bool {
    if let Some(allowed_resources) = permissions.get(&user.role) {
        return allowed_resources.contains(&resource.to_string());
    }
    true // Default: allow if role not found in permissions map
}

The function returns true (allow) when the role is not found in the permissions map. This is a logic error: the safe default should be false (deny). The Rust compiler cannot catch this because the logic is valid Rust code. The type system ensures memory safety, not authorization correctness. This pattern, default-allow on missing data, shows up across languages and codebases. It feels obviously wrong once you see it, but it slips through review because the code compiles and the happy path works.
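The fix is a one-liner, but the principle is fail-closed: deny unless the permission is affirmatively present. A sketch of the corrected function:

```rust
use std::collections::HashMap;

struct User {
    name: String,
    role: String,
}

fn is_authorized(user: &User, resource: &str, permissions: &HashMap<String, Vec<String>>) -> bool {
    permissions
        .get(&user.role)
        .map(|allowed| allowed.iter().any(|r| r.as_str() == resource))
        .unwrap_or(false) // fail closed: an unknown role gets no access
}
```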

Comparison: The Same Bug in C

int is_authorized(const char *role, const char *resource,
                  struct permission_map *perms) {
    struct permission_entry *entry = find_role(perms, role);
    if (entry == NULL) {
        return 1; // Same logic error: allow on missing role
    }
    return has_resource(entry, resource);
}

The logic error is identical in both languages. Rust prevents the memory safety bugs that C would pile on top (a null pointer dereference if the NULL check on find_role were forgotten), but the authorization logic is equally wrong. This is why it’s important to remember: Rust makes your code memory-safe, not correct.

Panics in Libraries

Here’s something that I think doesn’t get enough attention: Rust libraries that panic! on error conditions can crash the calling application. In a web server, a panic in a request handler crashes the thread (or the entire process if not caught).

The Vulnerable Pattern

pub fn parse_config(data: &[u8]) -> Config {
    let text = std::str::from_utf8(data).unwrap(); // Panics on invalid UTF-8
    let config: Config = toml::from_str(text).unwrap(); // Panics on invalid TOML
    config
}

If data comes from user input (e.g., a configuration upload endpoint), the attacker sends invalid UTF-8 and crashes the server. Every .unwrap() on user-controlled data is a potential denial-of-service vector. The fix is to return Result instead of panicking:

pub enum ConfigError {
    InvalidUtf8(std::str::Utf8Error),
    InvalidToml(toml::de::Error),
}

pub fn parse_config(data: &[u8]) -> Result<Config, ConfigError> {
    let text = std::str::from_utf8(data)
        .map_err(ConfigError::InvalidUtf8)?;
    let config: Config = toml::from_str(text)
        .map_err(ConfigError::InvalidToml)?;
    Ok(config)
}

Running cargo clippy with the unwrap_used lint enabled is a quick way to find these in any codebase that handles external input.
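The lint can also be enabled persistently rather than per-invocation. A sketch of the Cargo.toml form (the [lints] table requires Rust 1.74+):

```toml
# Cargo.toml: enable the lints for the whole crate
[lints.clippy]
unwrap_used = "warn"
expect_used = "warn"
```

For a one-off run, `cargo clippy -- -W clippy::unwrap_used` does the same from the command line.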

Detection Strategies

  • cargo clippy: catches unsafe usage patterns, unnecessary unsafe, and unwrap() on Result. Limitation: cannot verify unsafe invariants.
  • cargo audit: catches known CVEs in dependencies. Limitation: does not analyse source code.
  • cargo-geiger: counts unsafe usage in dependencies. Limitation: quantitative only; does not assess correctness.
  • Miri: catches undefined behaviour in unsafe code (use-after-free, out-of-bounds access, data races). Limitation: requires test execution, is slow, and does not cover all UB.
  • Semgrep: pattern-based detection of common Rust anti-patterns. Limitation: limited Rust rule coverage.
  • #![forbid(unsafe_code)]: prevents unsafe anywhere in the crate. Limitation: too restrictive for crates that need FFI or performance-critical code.

Manual Review Checklist

  1. Search for unsafe blocks: every instance needs a // SAFETY: comment explaining the invariants.
  2. Search for .unwrap() and .expect(): verify they are not called on user-controlled input.
  3. Search for as casts: value as u32 truncates without checking. Use try_into() instead.
  4. Search for *const and *mut: raw pointers bypass the borrow checker.
  5. Verify overflow-checks = true under [profile.release] in Cargo.toml.
  6. Search for extern "C": every FFI boundary is a trust boundary.
  7. Search for std::mem::transmute: type punning that bypasses all type safety.
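Item 3 in practice: as silently truncates, while try_into surfaces the loss. A small sketch:

```rust
use std::convert::TryInto;

fn main() {
    let big: u64 = 5_000_000_000;
    let truncated = big as u32;         // silent truncation
    assert_eq!(truncated, 705_032_704); // 5_000_000_000 mod 2^32
    let checked: Result<u32, _> = big.try_into();
    assert!(checked.is_err());          // the loss is reported, not hidden
}
```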

Remediation Patterns

Minimise unsafe Surface Area

// Bad: Large unsafe block
unsafe {
    let ptr = data.as_ptr();
    let len = data.len();
    let slice = slice::from_raw_parts(ptr, len);
    process(slice);
    cleanup(ptr);
}

// Good: Minimal unsafe block with safe wrapper
fn safe_slice(data: &[u8]) -> &[u8] {
    // SAFETY: as_ptr() and len() are consistent for the lifetime of data
    unsafe { slice::from_raw_parts(data.as_ptr(), data.len()) }
}

let slice = safe_slice(&data);
process(slice);
// cleanup is handled by Drop

Enable Release Overflow Checks

# Cargo.toml
[profile.release]
overflow-checks = true

This makes release builds panic on integer overflow, matching debug behaviour. The performance cost is minimal for most applications, and it should be the default for any application that handles untrusted input.

Use newtype Wrappers for Security-Critical Values

struct UserId(u64);
struct AdminToken(String);

fn delete_user(admin: &AdminToken, target: &UserId) -> Result<(), AuthError> {
    verify_admin(admin)?;
    // The type system prevents accidentally passing a UserId where an AdminToken is expected
    do_delete(target)
}

Newtypes prevent mixing up security-critical values (user IDs, tokens, permissions) at compile time. I use this pattern regularly, and it’s caught real bugs during refactoring where I accidentally swapped two arguments of the same underlying type.

Key Takeaways

  1. unsafe reintroduces every bug class Rust prevents. Minimise unsafe blocks, document invariants with // SAFETY: comments, and test with Miri.
  2. Integer overflow differs between debug and release. Enable overflow-checks = true in release profiles, or use checked_* methods on untrusted input.
  3. FFI boundaries are trust boundaries. The Rust compiler cannot verify C code. Validate inputs before FFI calls and check return values after.
  4. .unwrap() on user input is a denial-of-service vulnerability. Return Result from all functions that process external data.
  5. Rust prevents memory bugs, not logic bugs. Authorization errors, incorrect defaults, and business logic flaws are just as possible in Rust as in any other language.
  6. cargo-geiger and cargo audit are essential CI tools. Know how much unsafe your dependency tree contains and whether any dependencies have known CVEs.