Integer overflow (CWE-190) is one of those bugs that I find endlessly fascinating because of how quietly destructive it is. It happens when an arithmetic operation produces a value that exceeds the maximum (or falls below the minimum) representable value for the integer type. In C and C++, signed integer overflow is undefined behaviour: the compiler is free to assume it never happens, and optimizations built on that assumption can eliminate bounds checks entirely. Unsigned overflow wraps around silently. Go and Java define overflow as wrapping (two’s complement), which prevents undefined behaviour but still produces incorrect results that lead to security vulnerabilities: undersized allocations, bypassed length checks, and negative indices into arrays. Rust panics on overflow in debug builds but wraps in release builds by default, creating a gap between testing and production behaviour that caught me off guard when I first started digging into Rust’s safety guarantees. I want to walk through integer overflow across C, C++, Rust, Go, and Java, from the textbook multiplication overflow to the subtle cast truncation that can survive expert review.

Why Integer Overflow Matters for Security

Integer overflow rarely crashes a program directly. What I find interesting is how it corrupts a value that is used downstream in a security-critical decision:

  1. Buffer allocation: A size calculation overflows to a small value. The program allocates a tiny buffer, then copies the original (large) amount of data into it: a heap buffer overflow.
  2. Length validation: A bounds check uses the overflowed value. The check passes because the wrapped value is small, but the actual data is large.
  3. Financial calculations: An amount wraps around, turning a large debit into a credit or a small payment into a massive one.
  4. Array indexing: A signed overflow produces a negative value used as an array index, reading or writing out of bounds.
  5. Loop termination: A counter overflows and wraps to zero, creating an infinite loop or skipping loop body execution entirely.

The danger is amplified because integer overflow is silent in most languages. There is no exception, no error code, no log entry. The program continues with a wrong value, and the consequences appear far from the arithmetic that caused them. The more I researched real-world CVEs involving overflow, the more I noticed a pattern: the root cause is often an overflow three or four function calls away from the symptom, which makes debugging these incredibly frustrating.

The Easy-to-Spot Version

C: Multiplication Overflow in Allocation Size

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

void process_items(uint32_t count, size_t item_size, const void *data) {
    size_t total = count * item_size;
    char *buffer = malloc(total);
    if (!buffer) {
        fprintf(stderr, "Allocation failed\n");
        return;
    }
    memcpy(buffer, data, count * item_size);
    printf("Processed %u items (%zu bytes)\n", count, total);
    free(buffer);
}

If count is 0x10000 and item_size is 0x10000 on a 32-bit system (or anywhere the multiplication is done in 32-bit arithmetic), count * item_size overflows to 0. malloc(0) returns either NULL or a valid pointer to a zero-size allocation (implementation-defined). The memcpy here uses the same wrapped expression, so it copies 0 bytes and does not itself overflow, but the caller believes 4 GB of data were processed, and any downstream code that walks count items of item_size bytes reads or writes far past a zero-byte allocation. In the common real-world variant, where the copy length is derived from the unwrapped values, the memcpy is a heap buffer overflow of catastrophic proportions.

Virtually every SAST tool flags unchecked multiplication before malloc. The fix is to check for overflow before the allocation. This is essentially the “hello world” of integer overflow bugs; if your tooling isn’t catching this one, something needs attention.

Java: Signed Integer Wrapping in Array Size

import java.util.Arrays;

public class DataProcessor {
    public static byte[] createBuffer(int width, int height, int bytesPerPixel) {
        int size = width * height * bytesPerPixel;
        if (size <= 0) {
            throw new IllegalArgumentException("Invalid dimensions");
        }
        byte[] buffer = new byte[size];
        return buffer;
    }

    public static void main(String[] args) {
        int width = 65536;
        int height = 65536;
        int bytesPerPixel = 4;
        byte[] buf = createBuffer(width, height, bytesPerPixel);
        System.out.println("Buffer size: " + buf.length);
    }
}

Java’s int is a signed 32-bit integer. 65536 * 65536 * 4 overflows the 32-bit range; with the exact values in main the product wraps all the way to 0, so the size <= 0 check fires. Nudge one dimension, though, and the check is useless: 65536 * 65537 wraps to 65536, and multiplying by 4 gives 262,144, a small positive size that passes. The allocation succeeds with a 262 KB buffer for what the caller believes are about 17 GB of pixels, and subsequent writes based on the original dimensions overflow it. Java throws NegativeArraySizeException for negative sizes, but a small positive overflow slips through. This is the kind of thing that passes code review because the <= 0 check looks “good enough”; it took me a while to internalize why it isn’t.
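The wrap-to-zero and wrap-to-small-positive cases are easy to check side by side. A two-line sketch (class name mine):

```java
public class WrapDemo {
    public static void main(String[] args) {
        // 65536 * 65536 is exactly 2^32, which wraps the 32-bit int to 0.
        System.out.println(65536 * 65536 * 4);
        // 65536 * 65537 is 2^32 + 65536, which wraps to 65536; times 4 is a
        // small positive value that would pass a size <= 0 check.
        System.out.println(65536 * 65537 * 4);
    }
}
```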

The Hard-to-Spot Version

C++: Implicit Narrowing in Size Calculation

#include <vector>
#include <cstdint>
#include <iostream>
#include <cstring>

class PacketParser {
    std::vector<uint8_t> buffer_;

public:
    void parseHeader(const uint8_t* data, size_t length) {
        if (length < 4) return;

        uint16_t payload_count = (data[0] << 8) | data[1];
        uint16_t payload_size  = (data[2] << 8) | data[3];

        size_t total = payload_count * payload_size;
        buffer_.resize(total);

        if (length >= 4 + total) {
            std::memcpy(buffer_.data(), data + 4, total);
        }
    }

    void processPayloads(uint16_t payload_count, uint16_t payload_size) {
        for (uint16_t i = 0; i < payload_count; i++) {
            size_t offset = i * payload_size;
            if (offset + payload_size <= buffer_.size()) {
                processOne(buffer_.data() + offset, payload_size);
            }
        }
    }

private:
    void processOne(const uint8_t* data, size_t size) {
        std::cout << "Processing " << size << " bytes" << std::endl;
    }
};

Here’s what clicked for me when I was studying this pattern: the multiplication payload_count * payload_size is not actually performed in uint16_t. Integer promotion widens both operands to signed int first, so with payload_count = 1000 and payload_size = 100 the product is a correct 100,000 and nothing goes wrong. The trouble starts when the fields approach their maximum: with both near 65,535, the promoted signed product reaches roughly 4.29 billion, which exceeds INT_MAX (2,147,483,647). That is signed overflow, undefined behaviour, even though every variable in sight is unsigned. In practice the compiler typically produces a negative int, and converting that negative value to size_t for the resize yields an astronomically large size, turning four attacker-controlled bytes into a failed or multi-gigabyte allocation. processPayloads has the same hazard in miniature, because i * payload_size is promoted int arithmetic too.

The subtlety here is that C++ never performs arithmetic in types narrower than int: uint16_t operands are promoted to signed int before the multiplication, so two unsigned inputs quietly become a signed overflow hazard. Casting one operand to size_t before the multiplication fixes the issue, but the implicit promotion is invisible in review. Reading through CVE reports on network protocol parsers, this pattern comes up repeatedly; it only manifests when someone sends a packet with just the right field values.

Go: Silent Wrapping in Slice Capacity Calculation

package main

import (
	"encoding/binary"
	"fmt"
)

func decodeRecords(data []byte) ([][]byte, error) {
	if len(data) < 8 {
		return nil, fmt.Errorf("data too short")
	}

	recordCount := int32(binary.BigEndian.Uint32(data[0:4]))
	recordSize := int32(binary.BigEndian.Uint32(data[4:8]))

	if recordCount <= 0 || recordSize <= 0 {
		return nil, fmt.Errorf("invalid record parameters")
	}

	totalSize := recordCount * recordSize
	if totalSize <= 0 {
		return nil, fmt.Errorf("invalid total size")
	}

	if int(totalSize) > len(data)-8 {
		return nil, fmt.Errorf("data too short for records")
	}

	records := make([][]byte, recordCount)
	for i := int32(0); i < recordCount; i++ {
		start := 8 + int(i*recordSize)
		end := start + int(recordSize)
		records[i] = data[start:end]
	}
	return records, nil
}

func main() {
	data := make([]byte, 1024)
	binary.BigEndian.PutUint32(data[0:4], 100000)
	binary.BigEndian.PutUint32(data[4:8], 100000)
	records, err := decodeRecords(data)
	if err != nil {
		fmt.Println("Error:", err)
		return
	}
	fmt.Println("Decoded", len(records), "records")
}

Go’s int32 multiplication wraps silently. 100000 * 100000 is 10,000,000,000, which wraps int32 to 1,410,065,408; that happens to be large enough that the length check still rejects it, so the demo above fails safe. But the attacker chooses the fields. recordCount = 6,700,417 and recordSize = 641 multiply to exactly 2^32 + 1, which wraps to 1: that passes the totalSize <= 0 check, passes the length check, and the loop then slices data using the original recordCount and recordSize, which describe vastly more data than exists, causing a runtime panic on slice bounds. In a server context an unrecovered panic does not just kill the goroutine handling the request; it terminates the whole process.

What surprised me when I started looking into this is how often Go developers assume “Go handles integers sanely.” It does, in the sense that it doesn’t invoke undefined behaviour, but wrapping silently is still a bug. The fix is to perform the multiplication in 64-bit arithmetic and check against a reasonable maximum before proceeding.

Rust: Release-Mode Wrapping

use std::io::{self, Read};

struct FrameDecoder {
    max_frame_size: u32,
}

impl FrameDecoder {
    fn new(max_size: u32) -> Self {
        FrameDecoder { max_frame_size: max_size }
    }

    fn decode_frame(&self, header: &[u8]) -> Result<Vec<u8>, String> {
        if header.len() < 8 {
            return Err("Header too short".into());
        }

        let width = u32::from_be_bytes(header[0..4].try_into().unwrap());
        let height = u32::from_be_bytes(header[4..8].try_into().unwrap());

        let pixel_count = width * height;
        let frame_size = pixel_count * 4; // 4 bytes per pixel (RGBA)

        if frame_size > self.max_frame_size {
            return Err("Frame too large".into());
        }

        let mut frame = vec![0u8; frame_size as usize];
        Ok(frame)
    }
}

fn main() {
    let decoder = FrameDecoder::new(10_000_000);
    let header = [
        0x00, 0x01, 0x00, 0x00, // width = 65536
        0x00, 0x01, 0x00, 0x00, // height = 65536
    ];
    match decoder.decode_frame(&header) {
        Ok(frame) => println!("Frame size: {}", frame.len()),
        Err(e) => println!("Error: {}", e),
    }
}

In debug mode, width * height panics because 65536 * 65536 overflows u32. But in release mode (cargo build --release), Rust wraps the overflow silently. The product wraps to 0, frame_size becomes 0, the size check passes (0 < 10,000,000), and the function returns an empty Vec. Downstream code that expects a frame of width * height * 4 bytes gets an empty buffer, leading to out-of-bounds access or incorrect rendering.

This is a well-known Rust footgun that I think deserves more attention: developers test in debug mode where overflow panics, then deploy in release mode where it wraps. The behaviour difference between debug and release is documented, but it’s the kind of thing that catches you off guard until you’ve been burned by it. The Rust community discusses this trade-off regularly, and enabling overflow-checks = true in release profiles is worth considering for any security-sensitive code.
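For reference, the opt-in is a two-line stanza in Cargo.toml:

```toml
[profile.release]
overflow-checks = true
```

With that in place, release builds panic on overflow exactly as debug builds do, and the wrapped-to-zero frame size above becomes an immediate, diagnosable crash instead of a silent empty buffer.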

Java: Truncation in Type Cast

public class TransferService {
    public static boolean validateTransfer(long amount, long balance) {
        int transferAmount = (int) amount;
        if (transferAmount <= 0) {
            System.out.println("Invalid transfer amount");
            return false;
        }
        if (transferAmount > balance) {
            System.out.println("Insufficient funds");
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        long amount = 4_294_968_296L; // (1L << 32) + 1000: truncates to 1000
        long balance = 1000;
        if (validateTransfer(amount, balance)) {
            System.out.println("Transfer approved for: " + amount);
        }
    }
}

The cast (int) amount keeps only the low 32 bits of the long. 2_147_483_648L (Integer.MAX_VALUE + 1) truncates to Integer.MIN_VALUE, which the transferAmount <= 0 check catches, but 4_294_968_296L, which is (1L << 32) + 1000, truncates to exactly 1000 and passes both checks: the transfer is approved for more than 4 billion while the validation thinks it is only 1,000. The narrowing cast silently discards the upper 32 bits.

What I found eye-opening when researching this was a public case where exactly this kind of narrowing cast was buried several layers deep in a utility function. Nobody questioned it because “it had always worked.” It had always worked because nobody had ever sent a value larger than Integer.MAX_VALUE, until someone did. It’s a good reminder that “works in testing” and “correct” are very different things.

Detection Strategies

Static and Dynamic Analysis

Tool | Language | What It Catches | Limitations
--- | --- | --- | ---
GCC -ftrapv | C/C++ | Runtime trap on signed overflow | Performance overhead; unsigned overflow not caught
Clang -fsanitize=integer | C/C++ | Signed and unsigned overflow at runtime | Requires test execution with triggering inputs
UBSan (-fsanitize=undefined) | C/C++ | Signed overflow, shift overflow, division by zero | Runtime only; does not catch unsigned wrapping
cppcheck | C/C++ | Some integer overflow patterns | Limited to simple cases
SpotBugs | Java | Integer overflow in int arithmetic | Limited detection of narrowing casts
clippy | Rust | Warns about as casts that may truncate | Arithmetic overflow lints are opt-in restriction lints
go vet | Go | Limited integer analysis | Does not flag wrapping arithmetic
Semgrep | All | Pattern matching for unchecked arithmetic before allocation | Cannot reason about value ranges

Compiler Flags and Runtime Checks

Flag/Feature | Language | Effect
--- | --- | ---
-ftrapv | C/C++ (GCC) | Generates traps for signed overflow
-fsanitize=integer | C/C++ (Clang) | Comprehensive integer overflow detection
-fwrapv | C/C++ (GCC) | Defines signed overflow as wrapping (prevents UB but not bugs)
overflow-checks = true | Rust (Cargo.toml) | Enables overflow panics in release builds
Math.addExact, Math.multiplyExact | Java | Throw ArithmeticException on overflow
bits.Add64, bits.Mul64 | Go (math/bits) | Return a carry or high word for overflow detection

Manual Review Indicators

  1. Multiplication before malloc/new/make: any size calculation from untrusted input needs overflow checking.
  2. Narrowing casts (long to int, size_t to uint16_t, u64 to u32): the upper bits are silently discarded.
  3. Arithmetic on uint16_t or uint8_t in C/C++: integer promotion silently moves the operation into signed int, and the result may be narrowed back just as silently.
  4. as casts in Rust: value as u32 truncates without checking. Use try_into() instead.
  5. Signed/unsigned comparison: a negative signed value compared to an unsigned value is implicitly converted, producing a large unsigned value.
  6. Loop counters with user-controlled bounds: if the counter type is smaller than the bound type, it may wrap before reaching the termination condition.

Remediation

C: Checked Multiplication Before Allocation

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <limits.h>

void process_items(uint32_t count, size_t item_size, const void *data) {
    if (item_size != 0 && count > SIZE_MAX / item_size) {
        fprintf(stderr, "Integer overflow in size calculation\n");
        return;
    }
    size_t total = (size_t)count * item_size;
    char *buffer = malloc(total);
    if (!buffer) {
        fprintf(stderr, "Allocation failed\n");
        return;
    }
    memcpy(buffer, data, total);
    printf("Processed %u items (%zu bytes)\n", count, total);
    free(buffer);
}

Check that count <= SIZE_MAX / item_size before multiplying. This is the standard idiom for overflow-safe allocation in C. Defining a safe_mul helper in any codebase that does allocation from untrusted sizes is worth the effort; it saves you from having to remember the pattern every time.

C++: Use <limits> and Widening Arithmetic

#include <vector>
#include <cstdint>
#include <iostream>
#include <cstring>
#include <limits>

class PacketParser {
    std::vector<uint8_t> buffer_;

public:
    bool parseHeader(const uint8_t* data, size_t length) {
        if (length < 4) return false;

        uint16_t payload_count = (data[0] << 8) | data[1];
        uint16_t payload_size  = (data[2] << 8) | data[3];

        // Widen to size_t before multiplication
        size_t total = static_cast<size_t>(payload_count) * static_cast<size_t>(payload_size);

        // Sanity cap: overflow-free math can still describe absurd allocations.
        if (total > (1u << 20)) { // 1 MiB; tune to the protocol
            return false;
        }

        buffer_.resize(total);
        if (length >= 4 + total) {
            std::memcpy(buffer_.data(), data + 4, total);
        }
        return true;
    }
};

Cast both operands to size_t before the multiplication, so the arithmetic is performed in 64-bit (on 64-bit systems) and the overflow cannot occur. Adding a sanity upper bound is good practice too; even if the math is correct, allocating gigabytes of memory because a packet asked for it is never what you actually want.

Go: Use 64-Bit Arithmetic and Explicit Overflow Check

package main

import (
	"encoding/binary"
	"fmt"
	"math"
)

func decodeRecords(data []byte) ([][]byte, error) {
	if len(data) < 8 {
		return nil, fmt.Errorf("data too short")
	}

	recordCount := int64(binary.BigEndian.Uint32(data[0:4]))
	recordSize := int64(binary.BigEndian.Uint32(data[4:8]))

	if recordCount <= 0 || recordSize <= 0 {
		return nil, fmt.Errorf("invalid record parameters")
	}

	if recordCount > math.MaxInt64/recordSize {
		return nil, fmt.Errorf("integer overflow in size calculation")
	}

	totalSize := recordCount * recordSize
	if totalSize > int64(len(data)-8) {
		return nil, fmt.Errorf("data too short for records")
	}

	records := make([][]byte, recordCount)
	for i := int64(0); i < recordCount; i++ {
		start := 8 + int(i*recordSize)
		end := start + int(recordSize)
		records[i] = data[start:end]
	}
	return records, nil
}

func main() {
	data := make([]byte, 1024)
	binary.BigEndian.PutUint32(data[0:4], 100)
	binary.BigEndian.PutUint32(data[4:8], 8)
	records, err := decodeRecords(data)
	if err != nil {
		fmt.Println("Error:", err)
		return
	}
	fmt.Println("Decoded", len(records), "records")
}

Widen to int64 before arithmetic and check recordCount > math.MaxInt64/recordSize before multiplying. Go’s math/bits package also provides Mul64 which returns the high and low halves of the product, enabling precise overflow detection. Reaching for int64 by default whenever you’re dealing with sizes from untrusted input in Go is a solid habit.

Rust: Use checked_mul and Enable Release Overflow Checks

struct FrameDecoder {
    max_frame_size: u32,
}

impl FrameDecoder {
    fn new(max_size: u32) -> Self {
        FrameDecoder { max_frame_size: max_size }
    }

    fn decode_frame(&self, header: &[u8]) -> Result<Vec<u8>, String> {
        if header.len() < 8 {
            return Err("Header too short".into());
        }

        let width = u32::from_be_bytes(header[0..4].try_into().unwrap());
        let height = u32::from_be_bytes(header[4..8].try_into().unwrap());

        let pixel_count = width.checked_mul(height)
            .ok_or("Overflow in pixel count")?;
        let frame_size = pixel_count.checked_mul(4)
            .ok_or("Overflow in frame size")?;

        if frame_size > self.max_frame_size {
            return Err("Frame too large".into());
        }

        let frame = vec![0u8; frame_size as usize];
        Ok(frame)
    }
}

Use checked_mul, checked_add, and checked_sub for all arithmetic on untrusted input. These return None on overflow, which integrates naturally with Rust’s Option/Result error handling. For project-wide protection, add overflow-checks = true to the [profile.release] section of Cargo.toml. The performance cost of checked arithmetic is negligible for anything that isn’t a tight inner loop, and for security-critical code, it’s a worthwhile trade-off.

Java: Use Math.multiplyExact and Avoid Narrowing Casts

public class TransferService {
    public static boolean validateTransfer(long amount, long balance) {
        if (amount <= 0) {
            System.out.println("Invalid transfer amount");
            return false;
        }
        if (amount > balance) {
            System.out.println("Insufficient funds");
            return false;
        }
        return true;
    }

    public static int safeAllocSize(int width, int height, int bytesPerPixel) {
        try {
            int size = Math.multiplyExact(Math.multiplyExact(width, height), bytesPerPixel);
            if (size <= 0) {
                throw new ArithmeticException("Non-positive size");
            }
            return size;
        } catch (ArithmeticException e) {
            throw new IllegalArgumentException("Dimensions cause integer overflow", e);
        }
    }
}

Math.multiplyExact throws ArithmeticException on overflow instead of wrapping silently. For the transfer validation, keep the parameter as long throughout; never narrow to int. If an int is required downstream, use Math.toIntExact(longValue), which throws on truncation. It has been available since Java 8, and there is really no excuse not to use it for anything involving untrusted input; I’m genuinely surprised it isn’t used more widely.
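To make Math.toIntExact concrete, a small sketch (class name and values mine):

```java
public class ToIntExactDemo {
    public static void main(String[] args) {
        // In range: behaves like the cast, no information lost.
        System.out.println(Math.toIntExact(1000L));

        // Low 32 bits are 1000, so a plain (int) cast would yield a
        // plausible-looking 1000: toIntExact refuses instead.
        long sneaky = (1L << 32) + 1000;
        try {
            System.out.println(Math.toIntExact(sneaky));
        } catch (ArithmeticException e) {
            System.out.println("rejected");
        }
    }
}
```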

Key Takeaways

  1. Integer overflow is silent in most languages. C/C++ have undefined behaviour for signed overflow; Go, Java, and Rust (release mode) wrap without warning. You must add explicit checks.
  2. Always check before multiplying sizes from untrusted input. The pattern if (a != 0 && b > MAX / a) catches overflow before it happens.
  3. Widen before arithmetic, not after. Cast operands to a larger type before the operation; casting the result is too late, because the overflow has already occurred.
  4. Avoid narrowing casts on security-critical values. A long-to-int cast in Java or a u64-to-u32 cast in Rust silently discards bits. Use Math.toIntExact or try_into().
  5. Rust developers: enable overflow-checks in release profiles. The debug/release behaviour difference is a known source of production bugs.
  6. Use language-provided checked arithmetic: checked_mul in Rust, Math.multiplyExact in Java, math/bits in Go. These are designed for exactly this purpose.