<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Technology on guy@secdev.uk</title>
    <link>https://www.secdev.uk/blog/technology/</link>
    <description>Recent content in Technology on guy@secdev.uk</description>
    <generator>Hugo</generator>
    <language>en-gb</language>
    <copyright>Guy Dixon | guy@secdev.uk</copyright>
    <lastBuildDate>Sat, 28 Mar 2026 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://www.secdev.uk/blog/technology/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Vintage Adventures - MOS 6502 - Part 2</title>
      <link>https://www.secdev.uk/blog/technology/2026-03-28-vintage-adventures-6502-part-2/</link>
      <pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2026-03-28-vintage-adventures-6502-part-2/</guid>
      <description>&lt;p&gt;In Part 1 we covered the MOS 6502&amp;rsquo;s architecture, walked through its instruction set, and decoded a small test program by hand. That gave us the basic structure of a CPU emulator:&lt;/p&gt;&#xA;&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;while running:&#xA;    opcode = memory[PC]&#xA;    instruction = decode(opcode)&#xA;    instruction.execute(operands)&#xA;    PC += instruction.length&#xA;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The pseudocode is clean, but there are obvious pieces missing. We need to define the decode function, implement the execution logic for each instruction, and emulate both the memory and the registers. Time to turn that sketch into real code.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Vintage Adventures - MOS 6502 - Part 1</title>
      <link>https://www.secdev.uk/blog/technology/2026-03-21-vintage-adventures-6502-part-1/</link>
      <pubDate>Sat, 21 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2026-03-21-vintage-adventures-6502-part-1/</guid>
      <description>&lt;p&gt;This next set of posts is a bit of a distraction from security-themed articles: we&amp;rsquo;ll explore some vintage computer hardware.&lt;/p&gt;&#xA;&lt;p&gt;The MOS 6502 is a classic CPU that drove the home computer revolution in the late 1970s and early 1980s. Along with the Zilog Z80, it brought computing to the masses. The 6502 powered some of the most iconic machines of the era: the Apple II, the Commodore 64, the Atari 2600 (via the cut-down 6507 variant), and the British-built BBC Micro, among others. It even found its way into the original Nintendo Entertainment System (as the Ricoh 2A03, a modified 6502).&lt;/p&gt;</description>
    </item>
    <item>
      <title>Memory Safety Without Rust: Defensive C and C&#43;&#43; Patterns</title>
      <link>https://www.secdev.uk/blog/technology/2026-03-14-memory-safety-without-rust/</link>
      <pubDate>Sat, 14 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2026-03-14-memory-safety-without-rust/</guid>
      <description>&lt;p&gt;I hear &amp;ldquo;just rewrite it in Rust&amp;rdquo; a lot these days, and while Rust&amp;rsquo;s ownership model genuinely does eliminate entire classes of memory safety bugs at compile time, that advice ignores reality. The vast majority of systems code &amp;ndash; operating systems, embedded firmware, database engines, network stacks &amp;ndash; is written in C and C++ and will remain so for decades. Rewriting is not always an option. So I wanted to dig into the defensive patterns, compiler features, and runtime tools that bring C and C++ codebases closer to memory safety without a language migration. What I found is that while none of these approaches match Rust&amp;rsquo;s compile-time guarantees, the combination of them makes a real difference.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Deserialization Attacks: From Pickle to ObjectInputStream</title>
      <link>https://www.secdev.uk/blog/technology/2026-02-28-deserialization-attacks/</link>
      <pubDate>Sat, 28 Feb 2026 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2026-02-28-deserialization-attacks/</guid>
      <description>&lt;p&gt;Deserialization vulnerabilities are some of the scariest bugs in application security, because when they&amp;rsquo;re exploitable, it&amp;rsquo;s almost always remote code execution. The core problem is that an application reconstructs objects from untrusted data without validating what types are being instantiated. In languages with powerful serialization mechanisms &amp;ndash; Python&amp;rsquo;s &lt;code&gt;pickle&lt;/code&gt;, Java&amp;rsquo;s &lt;code&gt;ObjectInputStream&lt;/code&gt;, PHP&amp;rsquo;s &lt;code&gt;unserialize&lt;/code&gt; &amp;ndash; an attacker can craft serialized payloads that execute arbitrary code during the deserialization process itself. The more I researched how these attacks work across languages, the more I appreciated how a single API call can turn into a full server compromise.&lt;/p&gt;</description>
    </item>
    <item>
      <title>CORS Misconfiguration: The Open Door You Didn&#39;t Know About</title>
      <link>https://www.secdev.uk/blog/technology/2026-02-14-cors-misconfiguration/</link>
      <pubDate>Sat, 14 Feb 2026 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2026-02-14-cors-misconfiguration/</guid>
      <description>&lt;p&gt;CORS misconfiguration is one of those vulnerabilities that keeps coming up because most developers don&amp;rsquo;t fully understand what CORS actually does. It&amp;rsquo;s the browser mechanism that controls which origins are allowed to read responses from your API. When it&amp;rsquo;s configured correctly, it prevents malicious sites from stealing data through a victim&amp;rsquo;s browser. When it&amp;rsquo;s misconfigured &amp;ndash; and public bug bounty reports show this happens constantly &amp;ndash; it effectively disables the Same-Origin Policy, letting any website read authenticated responses from your API. What makes CORS misconfigurations particularly interesting to study is that they&amp;rsquo;re invisible to users, silent in server logs, and trivial to exploit.&lt;/p&gt;</description>
    </item>
    <item>
      <title>XXE Attacks: XML Parsing Gone Wrong</title>
      <link>https://www.secdev.uk/blog/technology/2026-01-31-xxe-attacks/</link>
      <pubDate>Sat, 31 Jan 2026 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2026-01-31-xxe-attacks/</guid>
      <description>&lt;p&gt;XML External Entity injection is one of those vulnerabilities that fascinated me the more I dug into it. The core issue is that the XML spec supports external entities, a feature that lets XML documents pull in content from external sources, and most parsers enable this by default. When an app parses untrusted XML without disabling that feature, an attacker can read arbitrary files off the server, perform SSRF, and sometimes even get remote code execution. What surprised me most when researching this was how straightforward the exploitation is compared to how long these bugs survive in production: the attack payloads are simple, but the parser defaults are so permissive that developers often have no idea the risk exists.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Secrets in Source Code: Finding and Eliminating Hardcoded Credentials</title>
      <link>https://www.secdev.uk/blog/technology/2026-01-17-secrets-in-source-code/</link>
      <pubDate>Sat, 17 Jan 2026 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2026-01-17-secrets-in-source-code/</guid>
      <description>&lt;p&gt;Hardcoded credentials are one of the most common and most preventable vulnerability classes out there. API keys, database passwords, encryption keys, and service tokens embedded directly in source code end up in version control, build artifacts, container images, and log files. Once a secret reaches a Git repository, it persists in the history even after the offending line is deleted. When I started researching how often this happens in practice, the numbers were staggering: public reports of leaked credentials on GitHub alone run into the millions per year. In this post I&amp;rsquo;ll cover the patterns that lead to hardcoded secrets, the tools that detect them, and the architecture changes that eliminate them for good.&lt;/p&gt;</description>
    </item>
    <item>
      <title>String Formatting and Security: A Cross-Language Minefield</title>
      <link>https://www.secdev.uk/blog/technology/2026-01-03-string-formatting-and-security/</link>
      <pubDate>Sat, 03 Jan 2026 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2026-01-03-string-formatting-and-security/</guid>
      <description>&lt;p&gt;String formatting is one of those operations that&amp;rsquo;s everywhere, and it&amp;rsquo;s more dangerous than most developers realise when user input gets involved. Every language provides multiple ways to build strings from dynamic data, and each mechanism carries different security implications. From C&amp;rsquo;s &lt;code&gt;printf&lt;/code&gt; family, where a format string bug can read and write arbitrary memory, to Python&amp;rsquo;s &lt;code&gt;str.format&lt;/code&gt;, where a user-controlled template can walk attribute chains and leak internal state, the attack surface is broader than most people think. I wanted to map out the full landscape across languages, and what I found was that each mechanism breaks down in its own unique and sometimes surprising way.&lt;/p&gt;</description>
    </item>
    <item>
      <title>SAST Tools Compared: What They Catch and What They Miss</title>
      <link>https://www.secdev.uk/blog/technology/2025-12-20-sast-tools-compared/</link>
      <pubDate>Sat, 20 Dec 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-12-20-sast-tools-compared/</guid>
      <description>&lt;p&gt;Static Application Security Testing (SAST) tools are the first line of automated defence against vulnerabilities in source code. They analyse code without executing it, looking for patterns that match known vulnerability classes. But here&amp;rsquo;s the thing: no single tool catches everything, and the differences between tools in detection capability, false positive rates, and language support are significant. I wanted to understand exactly where the gaps are, so I spent time running these tools against intentionally vulnerable code and comparing their output. This post is my honest assessment of what they actually catch, what they miss, and where manual review has to pick up the slack.&lt;/p&gt;</description>
    </item>
    <item>
      <title>The Art of the Subtle Bug: Nuanced Vulnerabilities That Evade Review</title>
      <link>https://www.secdev.uk/blog/technology/2025-12-06-the-art-of-the-subtle-bug/</link>
      <pubDate>Sat, 06 Dec 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-12-06-the-art-of-the-subtle-bug/</guid>
      <description>&lt;p&gt;The vulnerabilities that cause real breaches are rarely the textbook examples. They&amp;rsquo;re the ones that survive multiple rounds of code review, pass SAST scans, and sit in production for years. The more I researched these nuanced bugs, the more I realised what makes them dangerous: they exploit assumptions reviewers make about language behaviour, framework internals, or data flow boundaries. This post dissects the patterns that make a vulnerability subtle and walks through real examples that show why even experienced reviewers still miss them.&lt;/p&gt;</description>
    </item>
    <item>
      <title>JavaScript Security: Prototype Pollution to Supply Chain Attacks</title>
      <link>https://www.secdev.uk/blog/technology/2025-11-22-javascript-security-prototype-pollution/</link>
      <pubDate>Sat, 22 Nov 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-11-22-javascript-security-prototype-pollution/</guid>
      <description>&lt;p&gt;JavaScript is the one language I can never escape: it&amp;rsquo;s on both sides of the web. In the browser it handles user interaction and DOM manipulation, and on the server Node.js powers APIs, microservices, and build tools. This dual nature creates an attack surface that&amp;rsquo;s uniquely challenging to secure. Browser-side JavaScript faces XSS, DOM clobbering, and postMessage abuse. Server-side JavaScript faces prototype pollution, dependency confusion, ReDoS, and the vast npm ecosystem where a single malicious package can compromise thousands of applications. In this post, I want to walk through the JavaScript-specific anti-patterns that keep coming up, from the prototype chain manipulation that poisons every object in the runtime to the regex that freezes your server.&lt;/p&gt;</description>
    </item>
    <item>
      <title>C&#43;&#43; Security: Smart Pointers Aren&#39;t Always Smart Enough</title>
      <link>https://www.secdev.uk/blog/technology/2025-11-08-cpp-security-smart-pointers/</link>
      <pubDate>Sat, 08 Nov 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-11-08-cpp-security-smart-pointers/</guid>
      <description>&lt;p&gt;The more I dug into C++ codebases, the more I noticed a recurring assumption: that switching to smart pointers and STL containers means you&amp;rsquo;re safe from memory bugs. C++ adds RAII, smart pointers, containers, and type-safe abstractions on top of C&amp;rsquo;s manual memory model, and these features genuinely eliminate many of C&amp;rsquo;s most common vulnerabilities: &lt;code&gt;std::string&lt;/code&gt; prevents buffer overflows, &lt;code&gt;std::unique_ptr&lt;/code&gt; prevents memory leaks, and &lt;code&gt;std::vector&lt;/code&gt; provides bounds-checked access via &lt;code&gt;.at()&lt;/code&gt;. But C++ also introduces new attack surfaces that turn out to be even trickier to spot: dangling references from moved-from objects, iterator invalidation, implicit conversions in template code, and the false sense of security that comes from using &amp;ldquo;safe&amp;rdquo; abstractions incorrectly. In this post, I want to cover the C++-specific anti-patterns that survive code review because they look correct to developers who trust the standard library.&lt;/p&gt;</description>
    </item>
    <item>
      <title>C Security: Manual Memory Management and Its Consequences</title>
      <link>https://www.secdev.uk/blog/technology/2025-10-25-c-security-manual-memory-management/</link>
      <pubDate>Sat, 25 Oct 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-10-25-c-security-manual-memory-management/</guid>
      <description>&lt;p&gt;C gives you direct control over memory allocation, pointer arithmetic, and hardware interaction. I respect that. But that control comes with absolutely no safety net: no bounds checking, no garbage collection, no type safety beyond what you enforce manually. Every buffer overflow, use-after-free, double-free, format string vulnerability, and null pointer dereference in C is a direct consequence of this design. C remains the language of operating systems, embedded systems, and performance-critical libraries, so its security pitfalls affect every layer of the software stack. When I started digging into the patterns behind C vulnerabilities, the same shapes kept appearing, from the textbook &lt;code&gt;strcpy&lt;/code&gt; overflow to the subtle integer promotion that bypasses a bounds check. Let me walk through them.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Rust Security: When unsafe Breaks the Promise</title>
      <link>https://www.secdev.uk/blog/technology/2025-10-11-rust-security-unsafe-breaks-promise/</link>
      <pubDate>Sat, 11 Oct 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-10-11-rust-security-unsafe-breaks-promise/</guid>
      <description>&lt;p&gt;I love Rust. I genuinely do. Its ownership system, borrow checker, and type system wipe out entire classes of vulnerabilities at compile time: use-after-free, double-free, data races, null pointer dereferences, buffer overflows. But here&amp;rsquo;s the thing: Rust gives you an escape hatch called &lt;code&gt;unsafe&lt;/code&gt;, and when it&amp;rsquo;s used incorrectly, it reintroduces every single vulnerability that Rust was designed to prevent. The more I dug into real-world Rust codebases, the more I found this happening. Beyond &lt;code&gt;unsafe&lt;/code&gt;, Rust has its own quirky set of security pitfalls: integer overflow behaviour that differs between debug and release builds, FFI boundaries that trust C code unconditionally, and logic errors that the type system simply cannot catch. In this post, I want to walk through the Rust-specific anti-patterns that break the safety promise.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Go Security: Goroutines, Error Handling, and Hidden Bugs</title>
      <link>https://www.secdev.uk/blog/technology/2025-09-27-go-security-goroutines-errors/</link>
      <pubDate>Sat, 27 Sep 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-09-27-go-security-goroutines-errors/</guid>
      <description>&lt;p&gt;Go&amp;rsquo;s simplicity is its greatest strength and, I&amp;rsquo;d argue, its most dangerous security property. The language has no exceptions, no generics-based abstractions (until recently), and no implicit behaviour: everything is explicit. But that explicitness creates its own class of vulnerabilities: unchecked errors that silently skip security validation, goroutine races on shared state, HTTP client defaults that follow redirects into internal networks, and string handling patterns that bypass input validation. In this post, I want to walk through the Go-specific anti-patterns that lead to security vulnerabilities, from the error that nobody checked to the goroutine that corrupted the authentication cache. The more I dug into Go&amp;rsquo;s security landscape, the more I realised these bugs are subtle precisely because the language feels so straightforward.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Java Security: From Spring Boot Misconfigs to Deserialization</title>
      <link>https://www.secdev.uk/blog/technology/2025-09-13-java-security-spring-boot-deserialization/</link>
      <pubDate>Sat, 13 Sep 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-09-13-java-security-spring-boot-deserialization/</guid>
      <description>&lt;p&gt;Java has this reputation for being &amp;ldquo;safe&amp;rdquo; because of its type system, managed memory, and mature ecosystem. The more I&amp;rsquo;ve dug into Java security, the more I think that reputation is misleading, and honestly, a bit dangerous. Java&amp;rsquo;s security pitfalls aren&amp;rsquo;t about buffer overflows or memory corruption. They&amp;rsquo;re about the language&amp;rsquo;s powerful runtime features: deserialization, reflection, JNDI lookups, expression languages, and the Spring framework&amp;rsquo;s convention-over-configuration philosophy that silently enables dangerous defaults. In this post I want to walk through the Java-specific anti-patterns that lead to remote code execution, data leaks, and authentication bypasses, from the classic &lt;code&gt;ObjectInputStream&lt;/code&gt; gadget chain to the Spring Boot actuator endpoint that can expose entire environments.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Python Security Pitfalls Every Developer Should Know</title>
      <link>https://www.secdev.uk/blog/technology/2025-08-30-python-security-pitfalls/</link>
      <pubDate>Sat, 30 Aug 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-08-30-python-security-pitfalls/</guid>
      <description>&lt;p&gt;I&amp;rsquo;ve spent a lot of time reviewing Python codebases, and the language&amp;rsquo;s readability and rapid development cycle are exactly what make it dangerous. Python is the default choice for web services, data pipelines, and automation scripts, and that same ease of use hides security pitfalls that experienced developers walk into regularly. The language&amp;rsquo;s dynamic nature &amp;ndash; runtime evaluation, duck typing, implicit conversions, and powerful serialization &amp;ndash; creates attack surfaces that simply don&amp;rsquo;t exist in statically typed languages. In this post, I want to cover the Python-specific anti-patterns that lead to real vulnerabilities, from the well-known &lt;code&gt;pickle&lt;/code&gt; deserialization trap to the subtle template injection that can survive code review.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Race Conditions</title>
      <link>https://www.secdev.uk/blog/technology/2025-08-16-race-conditions/</link>
      <pubDate>Sat, 16 Aug 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-08-16-race-conditions/</guid>
      <description>&lt;p&gt;Race conditions (CWE-362) are, in my opinion, the most insidious class of security bugs you&amp;rsquo;ll encounter. They occur when the behaviour of a program depends on the relative timing of concurrent operations, and at least one of those operations modifies shared state. The window between a check and a subsequent use of the checked value, the classic time-of-check to time-of-use (TOCTOU) pattern, is the most exploited form, but races also show up in counter increments, balance updates, session management, and file operations. What makes race conditions uniquely dangerous is their non-determinism: the bug may not manifest in thousands of test runs, then appear under production load when two requests arrive within microseconds of each other. I want to walk through race conditions in Python, Go, Java, and Rust, from the obvious unprotected counter to the subtle channel-based ordering assumption that passes every test but fails under contention.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Integer Overflow</title>
      <link>https://www.secdev.uk/blog/technology/2025-08-02-integer-overflow/</link>
      <pubDate>Sat, 02 Aug 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-08-02-integer-overflow/</guid>
      <description>&lt;p&gt;Integer overflow (CWE-190) is one of those bugs that I find endlessly fascinating because of how quietly destructive it is. It happens when an arithmetic operation produces a value that exceeds the maximum (or falls below the minimum) representable value for the integer type. In C and C++, signed integer overflow is undefined behaviour, the compiler is free to assume it never happens, and optimizations built on that assumption can eliminate bounds checks entirely. Unsigned overflow wraps around silently. Go and Java define overflow as wrapping (two&amp;rsquo;s complement), which prevents undefined behaviour but still produces incorrect results that lead to security vulnerabilities: undersized allocations, bypassed length checks, and negative indices into arrays. Rust panics on overflow in debug mode but wraps in release mode by default, creating a gap between testing and production behaviour that caught me off guard when I first started digging into Rust&amp;rsquo;s safety guarantees. I want to walk through integer overflow across C, C++, Rust, Go, and Java, from the textbook multiplication overflow to the subtle cast truncation that can survive expert review.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Null Pointer Dereference</title>
      <link>https://www.secdev.uk/blog/technology/2025-07-19-null-pointer-dereference/</link>
      <pubDate>Sat, 19 Jul 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-07-19-null-pointer-dereference/</guid>
      <description>&lt;p&gt;Null pointer dereference (CWE-476) is one of those bugs that shows up across every language, and the more I researched it for this post, the more I was struck by how much damage it can do depending on context. The consequences vary dramatically: C programs crash with a segfault (or worse, the kernel maps page zero and an attacker gets code execution), C++ invokes undefined behaviour that the compiler may optimise into literally anything, Go panics with a nil pointer dereference that kills the goroutine or the whole program, and Java throws a &lt;code&gt;NullPointerException&lt;/code&gt; that can crash the app or leak stack traces to an attacker. MITRE ranks CWE-476 consistently in the top 25 most dangerous software weaknesses, and digging into the CVE data, that ranking is well deserved. I want to walk through C, C++, Go, and Java here, from the obvious unchecked &lt;code&gt;malloc&lt;/code&gt; return to the subtle nil interface trap in Go and the conditional path where null silently propagates through multiple function calls.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Use After Free</title>
      <link>https://www.secdev.uk/blog/technology/2025-07-05-use-after-free/</link>
      <pubDate>Sat, 05 Jul 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-07-05-use-after-free/</guid>
      <description>&lt;p&gt;Use-after-free (CWE-416) is one of those bug classes that I wanted to understand deeply because it keeps showing up at the root of high-profile exploits. It occurs when a program continues to use a pointer after the memory it references has been freed. The freed memory may be reallocated for a different purpose, and the dangling pointer now reads or writes data that belongs to a completely different object. Attackers exploit this by controlling what gets allocated into the freed slot, replacing a data buffer with a crafted object that contains a function pointer, then triggering the dangling pointer to call through it. Reading through CVE databases, use-after-free is at the root of hundreds of browser exploits, kernel privilege escalations, and server compromises. This post covers C and C++, from the obvious free-then-use to the subtle shared-pointer aliasing and callback registration patterns that can evade expert review.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Out-of-Bounds Writes</title>
      <link>https://www.secdev.uk/blog/technology/2025-06-21-out-of-bounds-writes/</link>
      <pubDate>Sat, 21 Jun 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-06-21-out-of-bounds-writes/</guid>
      <description>&lt;p&gt;Out-of-bounds writes (CWE-787) are the single most dangerous class of memory corruption vulnerabilities on the SANS/CWE Top 25, and they&amp;rsquo;ve held that position for years. The reason is clear once you dig into the mechanics: writing past the end of a buffer can overwrite return addresses, function pointers, vtable entries, and adjacent heap metadata, giving attackers arbitrary code execution. Unlike higher-level languages where the runtime catches array index violations, C and C++ silently corrupt memory, and the consequences may not manifest until thousands of instructions later. Even Rust, with its ownership model, is vulnerable when &lt;code&gt;unsafe&lt;/code&gt; blocks bypass the borrow checker. In this post I&amp;rsquo;ll dissect out-of-bounds writes in C, C++, and Rust, from the classic &lt;code&gt;strcpy&lt;/code&gt; overflow to the subtle off-by-one in pointer arithmetic that can survive expert review.&lt;/p&gt;</description>
    </item>
    <item>
      <title>SSRF</title>
      <link>https://www.secdev.uk/blog/technology/2025-06-07-ssrf/</link>
      <pubDate>Sat, 07 Jun 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-06-07-ssrf/</guid>
      <description>&lt;p&gt;Server-Side Request Forgery is one of those vulnerability classes that I&amp;rsquo;ve grown to respect more and more the deeper I dig into it. The idea is simple: you trick a server into making HTTP requests to destinations you choose, turning it into your personal proxy. It can reach internal services, cloud metadata endpoints, and private networks that you&amp;rsquo;d never touch directly from the outside. OWASP gave SSRF its own category (A10) in 2021, and reading through the rationale, it was overdue. The case studies are striking: a single SSRF against &lt;code&gt;http://169.254.169.254/&lt;/code&gt; on AWS can leak IAM credentials and compromise an entire account. In this post, I&amp;rsquo;ll walk through Python, Java, Go, and JavaScript examples, from the textbook URL-in-a-parameter to the subtle redirect-chain and DNS rebinding variants that make SSRF so hard to defend against.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Logging Failures</title>
      <link>https://www.secdev.uk/blog/technology/2025-05-24-logging-failures/</link>
      <pubDate>Sat, 24 May 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-05-24-logging-failures/</guid>
      <description>&lt;p&gt;When I started researching logging failures for this post, I expected to find dramatic exploit chains. Instead, what I found was something more unsettling: the absence of evidence. The most frustrating thing about incident response isn&amp;rsquo;t finding a sophisticated exploit; it&amp;rsquo;s opening the log aggregator and finding nothing. No entries, no breadcrumbs, no trace that anything happened at all. That&amp;rsquo;s CWE-778 (Insufficient Logging), and it&amp;rsquo;s the backbone of OWASP A09: Security Logging and Monitoring Failures. This isn&amp;rsquo;t a crash or a data leak in the traditional sense, and that&amp;rsquo;s what makes it dangerous: when your incident response team can&amp;rsquo;t investigate what was never recorded, the attacker wins by default. In this post, I&amp;rsquo;m going to walk through logging failures across Python, Java, and Go, from the obvious missing-log-statement to the subtle cases where logging exists but captures the wrong data, at the wrong level, or silently drops events under load.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Integrity Failures</title>
      <link>https://www.secdev.uk/blog/technology/2025-05-10-integrity-failures/</link>
      <pubDate>Sat, 10 May 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-05-10-integrity-failures/</guid>
      <description>&lt;p&gt;Integrity failures happen when an application trusts data or code that hasn&amp;rsquo;t been verified, and they can lead to some of the most devastating compromises out there. OWASP A08 covers two patterns I find particularly fascinating: unsafe deserialization (CWE-502), where untrusted data is fed into a deserializer that can execute arbitrary code, and inclusion of functionality from untrusted sources (CWE-829), where the application loads and runs code from URLs, plugins, or scripts without integrity checks. Both patterns share a root cause: the application assumes that incoming data or code is benign. In this post I&amp;rsquo;ll walk through Python, Java, JavaScript, and Go, from the textbook &lt;code&gt;pickle.loads()&lt;/code&gt; to the subtle VM sandbox escapes that can survive expert review.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Authentication Failures</title>
      <link>https://www.secdev.uk/blog/technology/2025-04-26-authentication-failures/</link>
      <pubDate>Sat, 26 Apr 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-04-26-authentication-failures/</guid>
      <description>&lt;p&gt;Authentication is the front door of every application, and OWASP A07 documents how often that door is left unlocked. When I started digging into authentication failures, I realised they go far beyond weak passwords: they encompass hardcoded credentials compiled into binaries, brute-force attacks with no rate limiting, password hashes that can be reversed in seconds, and reset flows that hand tokens directly to attackers. These patterns show up in production regularly, sometimes in the same application. This post covers three CWEs across Python, Java, Go, and Rust: CWE-798 (Use of Hard-Coded Credentials), CWE-287 (Improper Authentication), and CWE-307 (Improper Restriction of Excessive Authentication Attempts).&lt;/p&gt;</description>
    </item>
    <item>
      <title>Vulnerable Components</title>
      <link>https://www.secdev.uk/blog/technology/2025-04-12-vulnerable-components/</link>
      <pubDate>Sat, 12 Apr 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-04-12-vulnerable-components/</guid>
      <description>&lt;p&gt;Your application is only as secure as its least-maintained dependency, and this is one of those lessons that really sinks in once you start digging into dependency trees. OWASP A06 (Vulnerable and Outdated Components) covers the reality that most modern applications are more dependency code than application code, and a single outdated library can undermine every security measure you&amp;rsquo;ve built. CWE-1104 captures this: the use of unmaintained third-party components with known vulnerabilities. In this post I&amp;rsquo;ll walk through real dependency chains in Python, Java, and JavaScript, from the Log4Shell-level disasters that make headlines to the subtle version pins that quietly accumulate CVEs while nobody&amp;rsquo;s watching.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Security Misconfiguration</title>
      <link>https://www.secdev.uk/blog/technology/2025-03-29-security-misconfiguration/</link>
      <pubDate>Sat, 29 Mar 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-03-29-security-misconfiguration/</guid>
      <description>&lt;p&gt;Security misconfiguration is the vulnerability class that really drove home for me why secure defaults matter more than secure documentation. OWASP A05 covers the gap between what a framework &lt;em&gt;can&lt;/em&gt; do securely and how developers actually configure it. Debug mode left on in production. CORS wide open. XML parsers that resolve external entities. Settings endpoints with no authentication. These aren&amp;rsquo;t coding mistakes; they&amp;rsquo;re configuration mistakes, and they show up everywhere. In this post I&amp;rsquo;ll walk through Python, Java, Go, and JavaScript examples covering CWE-16 (Improper Configuration) and CWE-611 (XML External Entity Processing), from the flags that any reviewer would catch to the subtle combinations that can survive months in production.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Insecure Design</title>
      <link>https://www.secdev.uk/blog/technology/2025-03-15-insecure-design/</link>
      <pubDate>Sat, 15 Mar 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-03-15-insecure-design/</guid>
      <description>&lt;p&gt;Insecure design is the vulnerability class that fascinates me the most, because no amount of perfect implementation can fix it. It lives in the architecture, the data flow, the decisions made before anyone wrote a line of code. OWASP A04 captures something that shows up again and again in real-world applications: systems that are insecure by design, not because of a coding mistake, but because the system was never designed to be secure in the first place. In this post, I want to focus on two of the most common manifestations: verbose error messages that leak internal details (CWE-209) and insufficiently protected credentials (CWE-522). I&amp;rsquo;ll walk through Python, Java, and JavaScript examples that range from the immediately obvious to the patterns that, from what I&amp;rsquo;ve seen in code reviews, can survive months without being caught.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Cryptographic Failures That Pass Code Review</title>
      <link>https://www.secdev.uk/blog/technology/2025-03-01-cryptographic-failures-that-pass-code-review/</link>
      <pubDate>Sat, 01 Mar 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-03-01-cryptographic-failures-that-pass-code-review/</guid>
      <description>&lt;p&gt;Cryptographic code is uniquely dangerous, and it&amp;rsquo;s one of the areas I find most challenging to review. The reason is simple: it can be completely wrong and still appear to work perfectly. A broken hash function still produces a hash. A weak cipher still encrypts and decrypts. A predictable random number generator still generates numbers. The application runs, tests pass, and the vulnerability sits quietly until an attacker exploits it. In this post, I want to walk through the cryptographic failures that routinely survive code review across Python, Java, Go, and Rust, from the obvious use of MD5 to the subtle misuse of otherwise strong primitives.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Broken Access Control</title>
      <link>https://www.secdev.uk/blog/technology/2025-02-15-broken-access-control/</link>
      <pubDate>Sat, 15 Feb 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-02-15-broken-access-control/</guid>
      <description>&lt;p&gt;Broken access control sits at the top of the OWASP Top 10 for good reason, and it&amp;rsquo;s the vulnerability class I find most fascinating to research. It&amp;rsquo;s the most common serious vulnerability in modern web applications, and it&amp;rsquo;s almost entirely a logic problem: no amount of input sanitization or encryption fixes it. The application simply fails to verify that the authenticated user is authorized to perform the requested action on the requested resource. In this post, I want to walk through the patterns that show up across Python, Java, and Go, from the IDOR that any pentester would find in minutes to the subtle authorization gaps that can survive months of code review.&lt;/p&gt;</description>
    </item>
    <item>
      <title>XSS Is Not Just a JavaScript Problem</title>
      <link>https://www.secdev.uk/blog/technology/2025-02-01-xss-is-not-just-a-javascript-problem/</link>
      <pubDate>Sat, 01 Feb 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-02-01-xss-is-not-just-a-javascript-problem/</guid>
      <description>&lt;p&gt;Cross-site scripting is often framed as a front-end problem: something that happens in JavaScript and gets fixed with JavaScript. But the more I dug into this, the clearer it became that XSS vulnerabilities almost always originate on the server side, in whatever language is generating the HTML. I&amp;rsquo;ve found XSS in Python templates, Java JSPs, Go&amp;rsquo;s &lt;code&gt;html/template&lt;/code&gt; misuse, Rust web frameworks, and server-rendered JavaScript. The language you write your backend in determines which XSS patterns you&amp;rsquo;ll run into and which ones will sneak past your review.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Command Injection Beyond os.system</title>
      <link>https://www.secdev.uk/blog/technology/2025-01-18-command-injection-beyond-os-system/</link>
      <pubDate>Sat, 18 Jan 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-01-18-command-injection-beyond-os-system/</guid>
      <description>&lt;p&gt;When most developers hear &amp;ldquo;command injection,&amp;rdquo; they think of &lt;code&gt;os.system()&lt;/code&gt; in Python or &lt;code&gt;Runtime.exec()&lt;/code&gt; in Java. Those are the textbook examples, and most teams know to avoid them. But the more I researched this topic, the more I realised that command injection surfaces through dozens of less obvious APIs across every language: subprocess pipes, shell expansions, backtick operators, and even seemingly safe exec functions that become dangerous with the wrong arguments. This is one of my favourite vulnerability classes to dig into because the attack surface is so much wider than people realise. Let me walk you through command injection patterns across seven languages, from the obvious to the genuinely subtle.&lt;/p&gt;</description>
    </item>
    <item>
      <title>SQL Injection Across Languages</title>
      <link>https://www.secdev.uk/blog/technology/2025-01-04-sql-injection-across-languages/</link>
      <pubDate>Sat, 04 Jan 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-01-04-sql-injection-across-languages/</guid>
      <description>&lt;p&gt;SQL injection is one of those vulnerability classes that refuses to go away, no matter how much the industry talks about it. I&amp;rsquo;ve been digging into how it manifests across different languages (Python, Java, Go, and JavaScript), and the root cause is always the same: untrusted input reaches a SQL query without proper parameterization. But the way developers introduce it varies wildly depending on the framework, ORM, and idioms of each language. In this post, I want to walk through real examples across these four languages, showing both the obvious patterns that any reviewer would catch and the subtle ones that slip through code review more often than you&amp;rsquo;d expect.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
