<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>guy@secdev.uk</title>
    <link>https://www.secdev.uk/blog/</link>
    <description>Recent content on guy@secdev.uk</description>
    <generator>Hugo</generator>
    <language>en-gb</language>
    <copyright>Guy Dixon | guy@secdev.uk</copyright>
    <lastBuildDate>Sat, 04 Apr 2026 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://www.secdev.uk/blog/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Rust Security Code Review Guide</title>
      <link>https://www.secdev.uk/blog/articles/rust_security_review_guide/</link>
      <pubDate>Sat, 04 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/articles/rust_security_review_guide/</guid>
      <description>&lt;h2 id=&#34;1-introduction&#34;&gt;1. Introduction&lt;/h2&gt;&#xA;&lt;p&gt;I put this guide together as a structured approach to security-focused code review for Rust applications. Whether you&amp;rsquo;re just starting to identify security vulnerabilities in Rust code or you&amp;rsquo;re an experienced developer looking for a language-specific checklist, I&amp;rsquo;ve tried to make it useful at both levels.&lt;/p&gt;&#xA;&lt;p&gt;Rust&amp;rsquo;s ownership model, borrow checker, and type system prevent entire classes of bugs: use-after-free, null pointer dereferences, and data races in safe code. However, what I found when I started reviewing Rust codebases is that &lt;code&gt;unsafe&lt;/code&gt; blocks, third-party crate choices, and application-level logic errors still introduce serious vulnerabilities. What follows covers manual review strategies, common anti-patterns, recommended tooling, and vulnerability patterns organised by class, with cross-references to the intentionally vulnerable examples in this project.&lt;/p&gt;</description>
    </item>
    <item>
      <title>The New Shape of the Engineering Team</title>
      <link>https://www.secdev.uk/blog/leadership/4.3-the-new-shape-of-the-engineering-team/</link>
      <pubDate>Mon, 30 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/leadership/4.3-the-new-shape-of-the-engineering-team/</guid>
      <description>&lt;p&gt;If AI can handle the boilerplate, the scaffolding, and a growing portion of routine implementation, what does that mean for how we compose engineering teams? It&amp;rsquo;s a question I&amp;rsquo;ve been turning over for a while now, and the answers I keep arriving at are uncomfortable.&lt;/p&gt;&#xA;&lt;p&gt;The optimistic version is that AI frees engineers to focus on higher-value work, system design, problem framing, user understanding, architectural thinking. The pessimistic version is that it eliminates the entry-level work that junior engineers have traditionally used to learn the craft. The realistic version is probably somewhere in between, and navigating it well is one of the most important challenges facing technical leaders right now.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Vintage Adventures - MOS 6502 - Part 2</title>
      <link>https://www.secdev.uk/blog/technology/2026-03-28-vintage-adventures-6502-part-2/</link>
      <pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2026-03-28-vintage-adventures-6502-part-2/</guid>
      <description>&lt;p&gt;In Part 1 we covered the MOS 6502&amp;rsquo;s architecture, walked through its instruction set, and decoded a small test program by hand. That gave us the basic structure of a CPU emulator:&lt;/p&gt;&#xA;&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;while running:&#xA;    opcode = memory[PC]&#xA;    instruction = decode(opcode)&#xA;    instruction.execute(operands)&#xA;    PC += instruction.length&#xA;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The pseudocode is clean, but there are obvious pieces missing. We need to define the decode function, implement the execution logic for each instruction, and emulate both the memory and the registers. Time to turn that sketch into real code.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Rethinking Developer Productivity in the Age of AI Assistants</title>
      <link>https://www.secdev.uk/blog/leadership/4.2-rethinking-developer-productivity/</link>
      <pubDate>Mon, 23 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/leadership/4.2-rethinking-developer-productivity/</guid>
      <description>&lt;p&gt;Lines of code per day was always a terrible productivity metric. With AI coding assistants, it&amp;rsquo;s become an absurd one. An engineer with Copilot or a similar tool can generate code at a pace that would have been unimaginable five years ago. If you&amp;rsquo;re measuring productivity by volume, every engineer just got a 3x raise. If you&amp;rsquo;re measuring it by value delivered, the picture is more complicated.&lt;/p&gt;&#xA;&lt;p&gt;The more I&amp;rsquo;ve explored how AI assistants change engineering workflows, the more I&amp;rsquo;ve realised that the productivity conversation needs a fundamental reset. We&amp;rsquo;re not just doing the same work faster, we&amp;rsquo;re changing what the work is.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Vintage Adventures - MOS 6502 - Part 1</title>
      <link>https://www.secdev.uk/blog/technology/2026-03-21-vintage-adventures-6502-part-1/</link>
      <pubDate>Sat, 21 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2026-03-21-vintage-adventures-6502-part-1/</guid>
      <description>&lt;p&gt;This next set of posts is a bit of a distraction from the usual security-themed articles; we&amp;rsquo;ll explore some vintage computer hardware instead.&lt;/p&gt;&#xA;&lt;p&gt;The MOS 6502 is a classic CPU that drove the home computer revolution in the late 1970s and early 1980s. Along with the Zilog Z80, it brought computing to the masses. The 6502 powered some of the most iconic machines of the era: the Apple II, the Commodore 64, the Atari 2600, and the British-built BBC Micro, among others. It even found its way into the original Nintendo Entertainment System (as the Ricoh 2A03, a modified 6502).&lt;/p&gt;</description>
    </item>
    <item>
      <title>The AI Literacy Gap: Why Your Leadership Team Needs to Catch Up</title>
      <link>https://www.secdev.uk/blog/leadership/4.1-the-ai-literacy-gap/</link>
      <pubDate>Mon, 16 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/leadership/4.1-the-ai-literacy-gap/</guid>
      <description>&lt;p&gt;With the emergence of AI/GenAI, I felt that I needed to think, and write, about the impact of AI/GenAI on technology leadership. The next series of articles will explore this topic. As it is an emerging topic, these articles may well be shorter, and I will try to publish them weekly when possible.&lt;/p&gt;&#xA;&lt;p&gt;Most engineering leadership teams I talk to have a wide spread of AI understanding. On one end, you&amp;rsquo;ve got the enthusiasts: they&amp;rsquo;ve built agents, they use AI coding assistants daily, they can explain transformer architectures over a pint. On the other end, you&amp;rsquo;ve got leaders who haven&amp;rsquo;t meaningfully engaged with any of it. They&amp;rsquo;ve read the headlines, they&amp;rsquo;ve sat through a vendor demo, and they&amp;rsquo;re quietly hoping this is a hype cycle that will pass.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Memory Safety Without Rust: Defensive C and C&#43;&#43; Patterns</title>
      <link>https://www.secdev.uk/blog/technology/2026-03-14-memory-safety-without-rust/</link>
      <pubDate>Sat, 14 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2026-03-14-memory-safety-without-rust/</guid>
      <description>&lt;p&gt;I hear &amp;ldquo;just rewrite it in Rust&amp;rdquo; a lot these days, and while Rust&amp;rsquo;s ownership model genuinely does eliminate entire classes of memory safety bugs at compile time, that advice ignores reality. The vast majority of systems code &amp;ndash; operating systems, embedded firmware, database engines, network stacks &amp;ndash; is written in C and C++ and will remain so for decades. Rewriting is not always an option. So I wanted to dig into the defensive patterns, compiler features, and runtime tools that bring C and C++ codebases closer to memory safety without a language migration. What I found is that while none of these approaches match Rust&amp;rsquo;s compile-time guarantees, the combination of them makes a real difference.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Rust Learning Guide - Anti-Patterns</title>
      <link>https://www.secdev.uk/blog/articles/rust_antipatterns/</link>
      <pubDate>Wed, 11 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/articles/rust_antipatterns/</guid>
      <description>&lt;h2 id=&#34;overview&#34;&gt;Overview&lt;/h2&gt;&#xA;&lt;p&gt;The &lt;code&gt;antipatterns.rs&lt;/code&gt; module &lt;a href=&#34;https://github.com/guyadixon/RustLearning/blob/main/src/antipatterns.rs&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;&#xA;    Source Code File&#xA;&lt;/a&gt;&#xA; demonstrates 15 common Rust mistakes and their correct solutions. I put this together because I noticed the same patterns coming up, especially from developers (myself included) fighting the borrow checker instead of working with it.&lt;/p&gt;&#xA;&lt;h2 id=&#34;anti-patterns-covered&#34;&gt;Anti-Patterns Covered&lt;/h2&gt;&#xA;&lt;h3 id=&#34;1-unnecessary-clone-everywhere&#34;&gt;1. &lt;strong&gt;Unnecessary .clone() Everywhere&lt;/strong&gt;&lt;/h3&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;❌ Cloning to avoid borrow checker&lt;/li&gt;&#xA;&lt;li&gt;✅ Use references (&amp;amp;T) instead&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;2-using-unwrap-in-production&#34;&gt;2. &lt;strong&gt;Using .unwrap() in Production&lt;/strong&gt;&lt;/h3&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;❌ Crashes program on error&lt;/li&gt;&#xA;&lt;li&gt;✅ Return Result&amp;lt;T, E&amp;gt; and handle gracefully&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;3-returning-references-to-local-variables&#34;&gt;3. &lt;strong&gt;Returning References to Local Variables&lt;/strong&gt;&lt;/h3&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;❌ Dangling references (won&amp;rsquo;t compile)&lt;/li&gt;&#xA;&lt;li&gt;✅ Return owned data or use &amp;lsquo;static lifetime&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;4-using-string-when-str-would-work&#34;&gt;4. &lt;strong&gt;Using String When &amp;amp;str Would Work&lt;/strong&gt;&lt;/h3&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;❌ Forces caller to own String&lt;/li&gt;&#xA;&lt;li&gt;✅ Accept &amp;amp;str - works with both&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;5-ignoring-compiler-warnings&#34;&gt;5. 
&lt;strong&gt;Ignoring Compiler Warnings&lt;/strong&gt;&lt;/h3&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;❌ Unused mut, unused variables&lt;/li&gt;&#xA;&lt;li&gt;✅ Listen to the compiler&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;6-using-indices-instead-of-iterators&#34;&gt;6. &lt;strong&gt;Using Indices Instead of Iterators&lt;/strong&gt;&lt;/h3&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;❌ C-style loops can panic&lt;/li&gt;&#xA;&lt;li&gt;✅ Use iterators - safer and clearer&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;7-manually-implementing-what-traits-provide&#34;&gt;7. &lt;strong&gt;Manually Implementing What Traits Provide&lt;/strong&gt;&lt;/h3&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;❌ Manual equality checks&lt;/li&gt;&#xA;&lt;li&gt;✅ #[derive(PartialEq, Debug, Clone)]&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;8-using-vec-when-array-would-work&#34;&gt;8. &lt;strong&gt;Using Vec When Array Would Work&lt;/strong&gt;&lt;/h3&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;❌ Heap allocation for fixed-size data&lt;/li&gt;&#xA;&lt;li&gt;✅ Use arrays [T; N] for stack allocation&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;9-nested-matchif-let-instead-of-combinators&#34;&gt;9. &lt;strong&gt;Nested match/if let Instead of Combinators&lt;/strong&gt;&lt;/h3&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;❌ Deeply nested matching&lt;/li&gt;&#xA;&lt;li&gt;✅ Use .map(), .filter(), .and_then()&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;10-using-mutex-when-not-needed&#34;&gt;10. &lt;strong&gt;Using Mutex When Not Needed&lt;/strong&gt;&lt;/h3&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;❌ Mutex in single-threaded code&lt;/li&gt;&#xA;&lt;li&gt;✅ Just use regular variables&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;11-collecting-iterator-just-to-iterate-again&#34;&gt;11. 
&lt;strong&gt;Collecting Iterator Just to Iterate Again&lt;/strong&gt;&lt;/h3&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;❌ Unnecessary intermediate collection&lt;/li&gt;&#xA;&lt;li&gt;✅ Chain iterators directly&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;12-using-deref-coercion-as-inheritance&#34;&gt;12. &lt;strong&gt;Using Deref Coercion as Inheritance&lt;/strong&gt;&lt;/h3&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;❌ Simulating inheritance with Deref&lt;/li&gt;&#xA;&lt;li&gt;✅ Use composition explicitly&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;13-panicking-in-library-code&#34;&gt;13. &lt;strong&gt;Panicking in Library Code&lt;/strong&gt;&lt;/h3&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;❌ panic!() crashes caller&amp;rsquo;s program&lt;/li&gt;&#xA;&lt;li&gt;✅ Return Result and let caller decide&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;14-using-format-when-to_&#34;&gt;14. &lt;strong&gt;Using format! When to_string() Works&lt;/strong&gt;&lt;/h3&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;❌ Overkill for simple conversions&lt;/li&gt;&#xA;&lt;li&gt;✅ Use .to_string() for clarity&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;15-implementing-default-manually&#34;&gt;15. &lt;strong&gt;Implementing Default Manually&lt;/strong&gt;&lt;/h3&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;❌ Manual new() constructor&lt;/li&gt;&#xA;&lt;li&gt;✅ Implement Default trait&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h2 id=&#34;key-principle&#34;&gt;Key Principle&lt;/h2&gt;&#xA;&lt;p&gt;&lt;strong&gt;Work WITH Rust&amp;rsquo;s ownership system, not against it.&lt;/strong&gt;&lt;/p&gt;</description>
    </item>
    <item>
      <title>Deserialization Attacks: From Pickle to ObjectInputStream</title>
      <link>https://www.secdev.uk/blog/technology/2026-02-28-deserialization-attacks/</link>
      <pubDate>Sat, 28 Feb 2026 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2026-02-28-deserialization-attacks/</guid>
      <description>&lt;p&gt;Deserialization vulnerabilities are some of the scariest bugs in application security, because when they&amp;rsquo;re exploitable, it&amp;rsquo;s almost always remote code execution. The core problem is that an application reconstructs objects from untrusted data without validating what types are being instantiated. In languages with powerful serialization mechanisms &amp;ndash; Python&amp;rsquo;s &lt;code&gt;pickle&lt;/code&gt;, Java&amp;rsquo;s &lt;code&gt;ObjectInputStream&lt;/code&gt;, PHP&amp;rsquo;s &lt;code&gt;unserialize&lt;/code&gt; &amp;ndash; an attacker can craft serialized payloads that execute arbitrary code during the deserialization process itself. The more I researched how these attacks work across languages, the more I appreciated how a single API call can turn into a full server compromise.&lt;/p&gt;</description>
    </item>
    <item>
      <title>The Anti-Patterns That Kill Teams</title>
      <link>https://www.secdev.uk/blog/leadership/3.10-the-anti-patterns-that-kill-teams/</link>
      <pubDate>Mon, 23 Feb 2026 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/leadership/3.10-the-anti-patterns-that-kill-teams/</guid>
      <description>&lt;p&gt;Most team dysfunction isn&amp;rsquo;t unique. It follows recognisable patterns that repeat across organisations, industries, and decades. The good news is that if you can name the pattern, you&amp;rsquo;re halfway to fixing it. The bad news is that most leaders don&amp;rsquo;t recognise the patterns until the damage is well advanced.&lt;/p&gt;&#xA;&lt;p&gt;Osmani catalogues these anti-patterns extensively, and I&amp;rsquo;ve encountered most of them across my career. Here are the ones that I&amp;rsquo;ve seen do the most damage, organised by where they originate.&lt;/p&gt;</description>
    </item>
    <item>
      <title>CORS Misconfiguration: The Open Door You Didn&#39;t Know About</title>
      <link>https://www.secdev.uk/blog/technology/2026-02-14-cors-misconfiguration/</link>
      <pubDate>Sat, 14 Feb 2026 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2026-02-14-cors-misconfiguration/</guid>
      <description>&lt;p&gt;CORS misconfiguration is one of those vulnerabilities that keeps coming up because most developers don&amp;rsquo;t fully understand what CORS actually does. It&amp;rsquo;s the browser mechanism that controls which websites can make requests to your API. When it&amp;rsquo;s configured correctly, it prevents malicious sites from stealing data through a victim&amp;rsquo;s browser. When it&amp;rsquo;s misconfigured, and this happens constantly based on public bug bounty reports, it effectively disables the Same-Origin Policy, letting any website read authenticated responses from your API. What makes CORS misconfigurations particularly interesting to study is that they&amp;rsquo;re invisible to users, silent in server logs, and trivial to exploit.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Rust Learning Guide - Core Concepts Explained</title>
      <link>https://www.secdev.uk/blog/articles/rust_learning_guide/</link>
      <pubDate>Fri, 06 Feb 2026 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/articles/rust_learning_guide/</guid>
      <description>&lt;p&gt;I put this guide together to explain each Rust concept demonstrated in the example programs, with comparisons to C and Java. The more I dug into Rust&amp;rsquo;s design decisions, the more I appreciated how much thought went into making safety guarantees that don&amp;rsquo;t cost you performance. Here&amp;rsquo;s what I found.&lt;/p&gt;&#xA;&lt;p&gt;&lt;a href=&#34;https://github.com/guyadixon/RustLearning&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;&#xA;    Supporting Code Examples&#xA;&lt;/a&gt;&#xA;&lt;/p&gt;&#xA;&lt;hr&gt;&#xA;&lt;h2 id=&#34;1-ownership-system&#34;&gt;1. OWNERSHIP SYSTEM&lt;/h2&gt;&#xA;&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt; Rust&amp;rsquo;s core memory management system. Every value has a single owner, and when the owner goes out of scope, the value is dropped (freed). This was the concept that took the longest to click for me, but once it did, everything else fell into place.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Culture Is What You Do When Things Go Wrong</title>
      <link>https://www.secdev.uk/blog/leadership/3.9-culture-is-what-you-do-when-things-go-wrong/</link>
      <pubDate>Mon, 02 Feb 2026 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/leadership/3.9-culture-is-what-you-do-when-things-go-wrong/</guid>
      <description>&lt;p&gt;Every company I&amp;rsquo;ve worked at had values written down somewhere. Integrity. Innovation. Collaboration. Respect. They were on the website, in the onboarding deck, sometimes literally on the walls. And in most cases, they told you almost nothing about what it was actually like to work there.&lt;/p&gt;&#xA;&lt;p&gt;Culture isn&amp;rsquo;t what you say you value. It&amp;rsquo;s what you do, especially when things go wrong. Ines Sombra puts it perfectly: &amp;ldquo;Culture is what happens when what we want to believe about ourselves is challenged. Culture is what we do when we get things wrong, when we witness a violation of trust, or when we stay silent when an inappropriate comment is said in our presence.&amp;rdquo;&lt;/p&gt;</description>
    </item>
    <item>
      <title>XXE Attacks: XML Parsing Gone Wrong</title>
      <link>https://www.secdev.uk/blog/technology/2026-01-31-xxe-attacks/</link>
      <pubDate>Sat, 31 Jan 2026 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2026-01-31-xxe-attacks/</guid>
      <description>&lt;p&gt;XML External Entity injection is one of those vulnerabilities that fascinated me the more I dug into it. The core issue is that the XML spec supports external entities, a feature that lets XML documents pull in content from external sources, and most parsers enable this by default. When an app parses untrusted XML without disabling that feature, an attacker can read arbitrary files off the server, perform SSRF, and sometimes even get remote code execution. What surprised me most when researching this was how straightforward the exploitation is compared to how long these bugs survive in production: the attack payloads are simple, but the parser defaults are so permissive that developers often have no idea the risk exists.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Secrets in Source Code: Finding and Eliminating Hardcoded Credentials</title>
      <link>https://www.secdev.uk/blog/technology/2026-01-17-secrets-in-source-code/</link>
      <pubDate>Sat, 17 Jan 2026 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2026-01-17-secrets-in-source-code/</guid>
      <description>&lt;p&gt;Hardcoded credentials are one of the most common and most preventable vulnerability classes out there. API keys, database passwords, encryption keys, and service tokens embedded directly in source code end up in version control, build artifacts, container images, and log files. Once a secret reaches a Git repository, it persists in the history even after the offending line is deleted. When I started researching how often this happens in practice, the numbers were staggering, public reports of leaked credentials on GitHub alone run into the millions per year. In this post I&amp;rsquo;ll cover the patterns that lead to hardcoded secrets, the tools that detect them, and the architecture changes that eliminate them for good.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Scaling Your Team Without Breaking What Works</title>
      <link>https://www.secdev.uk/blog/leadership/3.8-scaling-your-team-without-breaking-what-works/</link>
      <pubDate>Mon, 12 Jan 2026 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/leadership/3.8-scaling-your-team-without-breaking-what-works/</guid>
      <description>&lt;p&gt;I lived through this at a growth start-up where the engineering team tripled in under two years. At fifteen people, we were fast, aligned, and effective. Everyone knew what everyone else was working on. Decisions happened in hallway conversations. The culture was strong because it was small enough to be transmitted through proximity.&lt;/p&gt;&#xA;&lt;p&gt;At forty-five people, almost none of that was true. Communication had broken down. New hires didn&amp;rsquo;t understand the culture because nobody had time to transmit it. Decisions that used to take minutes now took weeks because the number of stakeholders had multiplied. We were bigger, but we weren&amp;rsquo;t better, and for a painful period, we were actively worse.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Go Learning Guide</title>
      <link>https://www.secdev.uk/blog/articles/go-guide/</link>
      <pubDate>Tue, 06 Jan 2026 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/articles/go-guide/</guid>
      <description>&lt;p&gt;A structured learning guide for developers coming from Java, C, or Python who want to learn Go. I put this together because when I started learning Go myself, I kept wishing for a resource that mapped Go&amp;rsquo;s idioms back to languages I already knew.&#xA;&lt;a href=&#34;https://github.com/guyadixon/GoLearning&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;&#xA;    Github Repo of Code Examples&#xA;&lt;/a&gt;&#xA;&lt;/p&gt;&#xA;&lt;h2 id=&#34;introduction-to-go&#34;&gt;Introduction to Go&lt;/h2&gt;&#xA;&lt;p&gt;Go (also called Golang) was created at Google in 2009 by Robert Griesemer, Rob Pike, and Ken Thompson. It was designed to address the challenges of building large-scale, concurrent software systems while keeping the language simple and productive.&lt;/p&gt;</description>
    </item>
    <item>
      <title>String Formatting and Security: A Cross-Language Minefield</title>
      <link>https://www.secdev.uk/blog/technology/2026-01-03-string-formatting-and-security/</link>
      <pubDate>Sat, 03 Jan 2026 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2026-01-03-string-formatting-and-security/</guid>
      <description>&lt;p&gt;String formatting is one of those operations that&amp;rsquo;s everywhere, and it&amp;rsquo;s more dangerous than most developers realise when user input gets involved. Every language provides multiple ways to build strings from dynamic data, and each mechanism carries different security implications. From C&amp;rsquo;s &lt;code&gt;printf&lt;/code&gt; family, where a format string bug can read and write arbitrary memory, to Python&amp;rsquo;s f-strings that can execute attribute lookups, the attack surface is broader than most people think. I wanted to map out the full landscape across languages, and what I found was that each mechanism breaks down in its own unique and sometimes surprising way.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Running Effective Retrospectives (and Actually Following Up)</title>
      <link>https://www.secdev.uk/blog/leadership/3.7-running-effective-retrospectives/</link>
      <pubDate>Mon, 22 Dec 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/leadership/3.7-running-effective-retrospectives/</guid>
      <description>&lt;p&gt;Retrospectives should be the engine of continuous improvement. In practice, they&amp;rsquo;re often the meeting everyone tolerates but nobody values, a ritual performed out of obligation rather than conviction, producing sticky notes that get photographed and forgotten.&lt;/p&gt;&#xA;&lt;p&gt;I&amp;rsquo;ve run hundreds of retrospectives over the years, and I&amp;rsquo;ve wasted more of them than I&amp;rsquo;d like to admit. The ones that worked had a few things in common. The ones that didn&amp;rsquo;t were all failing in the same ways.&lt;/p&gt;</description>
    </item>
    <item>
      <title>SAST Tools Compared: What They Catch and What They Miss</title>
      <link>https://www.secdev.uk/blog/technology/2025-12-20-sast-tools-compared/</link>
      <pubDate>Sat, 20 Dec 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-12-20-sast-tools-compared/</guid>
      <description>&lt;p&gt;Static Application Security Testing (SAST) tools are the first line of automated defence against vulnerabilities in source code. They analyse code without executing it, looking for patterns that match known vulnerability classes. But here&amp;rsquo;s the thing, no single tool catches everything, and the differences between tools in detection capability, false positive rates, and language support are significant. I wanted to understand exactly where the gaps are, so I spent time running these tools against intentionally vulnerable code and comparing their output. This post is my honest assessment of what they actually catch, what they miss, and where manual review has to pick up the slack.&lt;/p&gt;</description>
    </item>
    <item>
      <title>The Art of the Subtle Bug: Nuanced Vulnerabilities That Evade Review</title>
      <link>https://www.secdev.uk/blog/technology/2025-12-06-the-art-of-the-subtle-bug/</link>
      <pubDate>Sat, 06 Dec 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-12-06-the-art-of-the-subtle-bug/</guid>
      <description>&lt;p&gt;The vulnerabilities that cause real breaches are rarely the textbook examples. They&amp;rsquo;re the ones that survive multiple rounds of code review, pass SAST scans, and sit in production for years. The more I researched these nuanced bugs, the more I realised what makes them dangerous: they exploit assumptions reviewers make about language behaviour, framework internals, or data flow boundaries. This post dissects the patterns that make a vulnerability subtle and walks through real examples that show why even experienced reviewers still miss them.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Designing Good Process (Without Becoming a Bureaucracy)</title>
      <link>https://www.secdev.uk/blog/leadership/3.6-designing-good-process/</link>
      <pubDate>Mon, 01 Dec 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/leadership/3.6-designing-good-process/</guid>
      <description>&lt;p&gt;Engineers hate process. Or rather, engineers hate &lt;em&gt;bad&lt;/em&gt; process, and most process they encounter is bad. It&amp;rsquo;s imposed from above without understanding the problem it&amp;rsquo;s meant to solve. It adds overhead without adding value. It treats every situation the same regardless of risk or complexity. No wonder the word &amp;ldquo;process&amp;rdquo; makes people flinch.&lt;/p&gt;&#xA;&lt;p&gt;But the absence of process isn&amp;rsquo;t freedom, it&amp;rsquo;s chaos. Having worked in both process-light start-ups and process-heavy corporates, I&amp;rsquo;ve seen both failure modes up close. The start-up where nobody knew how deployments worked because there was no documented process. The corporate where deploying a one-line change required three approvals and a change advisory board meeting. Neither extreme serves the team.&lt;/p&gt;</description>
    </item>
    <item>
      <title>JavaScript Security: Prototype Pollution to Supply Chain Attacks</title>
      <link>https://www.secdev.uk/blog/technology/2025-11-22-javascript-security-prototype-pollution/</link>
      <pubDate>Sat, 22 Nov 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-11-22-javascript-security-prototype-pollution/</guid>
      <description>&lt;p&gt;JavaScript is the one language I can never escape: it&amp;rsquo;s on both sides of the web. In the browser it handles user interaction and DOM manipulation, and on the server Node.js powers APIs, microservices, and build tools. This dual nature creates an attack surface that&amp;rsquo;s uniquely challenging to secure. Browser-side JavaScript faces XSS, DOM clobbering, and postMessage abuse. Server-side JavaScript faces prototype pollution, dependency confusion, ReDoS, and the vast npm ecosystem where a single malicious package can compromise thousands of applications. In this post, I want to walk through the JavaScript-specific anti-patterns that keep coming up, from the prototype chain manipulation that poisons every object in the runtime to the regex that freezes your server.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Outcomes Over Outputs: Measuring What Matters</title>
      <link>https://www.secdev.uk/blog/leadership/3.5-outcomes-over-outputs-measuring-what-matters/</link>
      <pubDate>Mon, 10 Nov 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/leadership/3.5-outcomes-over-outputs-measuring-what-matters/</guid>
      <description>&lt;p&gt;Early in my career, I inherited a team with a beautiful dashboard. Every metric was green. Velocity was up. Story points completed per sprint were trending in the right direction. Code coverage was above the target. Release frequency was on schedule. By every measure on that dashboard, the team was performing brilliantly.&lt;/p&gt;&#xA;&lt;p&gt;The product they were building had almost no users.&lt;/p&gt;&#xA;&lt;p&gt;That was my introduction to what Osmani calls the watermelon effect, metrics that look green on the surface but are red underneath. The team was producing outputs at an impressive rate. They just weren&amp;rsquo;t producing outcomes that mattered.&lt;/p&gt;</description>
    </item>
    <item>
      <title>C&#43;&#43; Security: Smart Pointers Aren&#39;t Always Smart Enough</title>
      <link>https://www.secdev.uk/blog/technology/2025-11-08-cpp-security-smart-pointers/</link>
      <pubDate>Sat, 08 Nov 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-11-08-cpp-security-smart-pointers/</guid>
      <description>&lt;p&gt;The more I dug into C++ codebases, the more I noticed a recurring assumption: developers who think that switching to smart pointers and STL containers means they&amp;rsquo;re safe from memory bugs. C++ adds RAII, smart pointers, containers, and type-safe abstractions on top of C&amp;rsquo;s manual memory model, and these features genuinely eliminate many of C&amp;rsquo;s most common vulnerabilities: &lt;code&gt;std::string&lt;/code&gt; prevents buffer overflows, &lt;code&gt;std::unique_ptr&lt;/code&gt; prevents memory leaks, and &lt;code&gt;std::vector&lt;/code&gt; provides bounds-checked access via &lt;code&gt;.at()&lt;/code&gt;. But C++ also introduces new attack surfaces that turn out to be even trickier to spot: dangling references from moved-from objects, iterator invalidation, implicit conversions in template code, and the false sense of security that comes from using &amp;ldquo;safe&amp;rdquo; abstractions incorrectly. In this post, I want to cover the C++-specific anti-patterns that survive code review because they look correct to developers who trust the standard library.&lt;/p&gt;</description>
    </item>
    <item>
      <title>C Security: Manual Memory Management and Its Consequences</title>
      <link>https://www.secdev.uk/blog/technology/2025-10-25-c-security-manual-memory-management/</link>
      <pubDate>Sat, 25 Oct 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-10-25-c-security-manual-memory-management/</guid>
      <description>&lt;p&gt;C gives you direct control over memory allocation, pointer arithmetic, and hardware interaction. I respect that. But that control comes with absolutely no safety net: no bounds checking, no garbage collection, no type safety beyond what you enforce manually. Every buffer overflow, use-after-free, double-free, format string vulnerability, and null pointer dereference in C is a direct consequence of this design. C remains the language of operating systems, embedded systems, and performance-critical libraries, so its security pitfalls affect every layer of the software stack. When I started digging into the patterns behind C vulnerabilities, the same shapes kept appearing, from the textbook &lt;code&gt;strcpy&lt;/code&gt; overflow to the subtle integer promotion that bypasses a bounds check. Let me walk through them.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Python Security Code Review Guide</title>
      <link>https://www.secdev.uk/blog/articles/python_security_review_guide/</link>
      <pubDate>Wed, 22 Oct 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/articles/python_security_review_guide/</guid>
      <description>&lt;h2 id=&#34;1-introduction&#34;&gt;1. Introduction&lt;/h2&gt;&#xA;&lt;p&gt;I put this guide together as a structured approach to security-focused code review for Python applications. Whether you&amp;rsquo;re just starting to identify security vulnerabilities in Python code or you&amp;rsquo;re an experienced developer looking for a language-specific checklist, I&amp;rsquo;ve tried to make it useful at both levels.&lt;/p&gt;&#xA;&lt;p&gt;Python&amp;rsquo;s dynamic typing, rich standard library, and extensive third-party ecosystem make it enormously productive, but the more I reviewed Python codebases, the more I realised these same qualities introduce security pitfalls that static analysis alone cannot always catch. What follows covers manual review strategies, common anti-patterns, recommended tooling, and vulnerability patterns organised by class, with cross-references to the intentionally vulnerable examples in this project.&lt;/p&gt;</description>
    </item>
    <item>
      <title>The Case for Team Stability</title>
      <link>https://www.secdev.uk/blog/leadership/3.4-the-case-for-team-stability/</link>
      <pubDate>Mon, 20 Oct 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/leadership/3.4-the-case-for-team-stability/</guid>
      <description>&lt;p&gt;There&amp;rsquo;s a persistent belief in engineering organisations that moving people between teams is healthy: it spreads knowledge, prevents silos, and keeps things fresh. I understand the appeal. I&amp;rsquo;ve also watched it destroy team effectiveness more times than I can count.&lt;/p&gt;&#xA;&lt;p&gt;The research is clear, and my experience confirms it: stable teams outperform shuffled ones. Not by a little, by a lot. And the reasons are rooted in how humans actually build trust and develop working relationships, not in how org charts look on a slide.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Rust Security: When unsafe Breaks the Promise</title>
      <link>https://www.secdev.uk/blog/technology/2025-10-11-rust-security-unsafe-breaks-promise/</link>
      <pubDate>Sat, 11 Oct 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-10-11-rust-security-unsafe-breaks-promise/</guid>
      <description>&lt;p&gt;I love Rust. I genuinely do. Its ownership system, borrow checker, and type system wipe out entire classes of vulnerabilities at compile time: use-after-free, double-free, data races, null pointer dereferences, buffer overflows. But here&amp;rsquo;s the thing: Rust gives you an escape hatch called &lt;code&gt;unsafe&lt;/code&gt;, and when it&amp;rsquo;s used incorrectly, it reintroduces every single vulnerability that Rust was designed to prevent. The more I dug into real-world Rust codebases, the more I found this happening. Beyond &lt;code&gt;unsafe&lt;/code&gt;, Rust has its own quirky set of security pitfalls: integer overflow behaviour that differs between debug and release builds, FFI boundaries that trust C code unconditionally, and logic errors that the type system simply cannot catch. In this post, I want to walk through the Rust-specific anti-patterns that break the safety promise.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Onboarding That Goes Beyond Setting Up a Dev Environment</title>
      <link>https://www.secdev.uk/blog/leadership/3.3-onboarding-that-goes-beyond-setting-up-a-dev-environment/</link>
      <pubDate>Mon, 29 Sep 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/leadership/3.3-onboarding-that-goes-beyond-setting-up-a-dev-environment/</guid>
      <description>&lt;p&gt;At one company I joined, my onboarding consisted of a laptop, a Confluence page titled &amp;ldquo;Getting Started,&amp;rdquo; and a Slack message saying &amp;ldquo;let us know if you need anything.&amp;rdquo; At another, I had a structured first week with an assigned buddy, a schedule of introductions, and a 30/60/90-day plan. The difference in how quickly I became effective was enormous, and it had almost nothing to do with the technical setup.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Go Security: Goroutines, Error Handling, and Hidden Bugs</title>
      <link>https://www.secdev.uk/blog/technology/2025-09-27-go-security-goroutines-errors/</link>
      <pubDate>Sat, 27 Sep 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-09-27-go-security-goroutines-errors/</guid>
      <description>&lt;p&gt;Go&amp;rsquo;s simplicity is its greatest strength and, I&amp;rsquo;d argue, its most dangerous security property. The language has no exceptions, no generics-based abstractions (until recently), and no implicit behaviour: everything is explicit. But that explicitness creates its own class of vulnerabilities: unchecked errors that silently skip security validation, goroutine races on shared state, HTTP client defaults that follow redirects into internal networks, and string handling patterns that bypass input validation. In this post, I want to walk through the Go-specific anti-patterns that lead to security vulnerabilities, from the error that nobody checked to the goroutine that corrupted the authentication cache. The more I dug into Go&amp;rsquo;s security landscape, the more I realised these bugs are subtle precisely because the language feels so straightforward.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Java Security: From Spring Boot Misconfigs to Deserialization</title>
      <link>https://www.secdev.uk/blog/technology/2025-09-13-java-security-spring-boot-deserialization/</link>
      <pubDate>Sat, 13 Sep 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-09-13-java-security-spring-boot-deserialization/</guid>
      <description>&lt;p&gt;Java has this reputation for being &amp;ldquo;safe&amp;rdquo; because of its type system, managed memory, and mature ecosystem. The more I&amp;rsquo;ve dug into Java security, the more I think that reputation is misleading, and honestly, a bit dangerous. Java&amp;rsquo;s security pitfalls aren&amp;rsquo;t about buffer overflows or memory corruption. They&amp;rsquo;re about the language&amp;rsquo;s powerful runtime features: deserialization, reflection, JNDI lookups, expression languages, and the Spring framework&amp;rsquo;s convention-over-configuration philosophy that silently enables dangerous defaults. In this post I want to walk through the Java-specific anti-patterns that lead to remote code execution, data leaks, and authentication bypasses, from the classic &lt;code&gt;ObjectInputStream&lt;/code&gt; gadget chain to the Spring Boot actuator endpoint that can expose entire environments.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Hiring for Values, Abilities, Then Skills</title>
      <link>https://www.secdev.uk/blog/leadership/3.2-hiring-for-values-abilities-then-skills/</link>
      <pubDate>Mon, 08 Sep 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/leadership/3.2-hiring-for-values-abilities-then-skills/</guid>
      <description>&lt;p&gt;Most engineering interviews are backwards. We spend the majority of our time evaluating technical skills, the thing that&amp;rsquo;s easiest to teach and quickest to develop, and almost no time evaluating values and abilities, which are much harder to change and far more predictive of long-term success.&lt;/p&gt;&#xA;&lt;p&gt;I&amp;rsquo;ve made this mistake myself, repeatedly. I&amp;rsquo;ve hired brilliant engineers who turned out to be terrible collaborators. I&amp;rsquo;ve passed on candidates who weren&amp;rsquo;t the strongest technically but would have been exactly what the team needed. The more I&amp;rsquo;ve hired across start-ups and corporates, the more convinced I&amp;rsquo;ve become that we need to flip the order.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Python Security Pitfalls Every Developer Should Know</title>
      <link>https://www.secdev.uk/blog/technology/2025-08-30-python-security-pitfalls/</link>
      <pubDate>Sat, 30 Aug 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-08-30-python-security-pitfalls/</guid>
      <description>&lt;p&gt;I&amp;rsquo;ve spent a lot of time reviewing Python codebases, and the language&amp;rsquo;s readability and rapid development cycle are exactly what make it dangerous. Python is the default choice for web services, data pipelines, and automation scripts, and that same ease of use hides security pitfalls that experienced developers walk into regularly. The language&amp;rsquo;s dynamic nature (runtime evaluation, duck typing, implicit conversions, and powerful serialization) creates attack surfaces that simply don&amp;rsquo;t exist in statically typed languages. In this post, I want to cover the Python-specific anti-patterns that lead to real vulnerabilities, from the well-known &lt;code&gt;pickle&lt;/code&gt; deserialization trap to the subtle template injection that can survive code review.&lt;/p&gt;</description>
    </item>
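The pickle trap that post opens with fits in a dozen lines. Here is a deliberately defanged sketch, assuming nothing beyond the standard library: the payload just sets a flag instead of running a shell command, but pickle will execute whatever callable a crafted byte stream names.

```python
import pickle

# pickle serialises whatever __reduce__ returns: a callable plus its
# arguments. The deserialiser then CALLS that callable - this is the
# whole attack.
class Payload:
    def __reduce__(self):
        # A real exploit would return (os.system, ("...",)); here we
        # merely record that attacker-chosen code ran.
        return (exec, ("import builtins; builtins.PWNED = True",))

blob = pickle.dumps(Payload())   # attacker controls these bytes
pickle.loads(blob)               # "just deserialising" runs the payload

import builtins
print(getattr(builtins, "PWNED", False))  # prints True
```

The fix is never to call pickle.loads() on untrusted bytes at all; a data-only format such as JSON cannot name a callable.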
    <item>
      <title>What Actually Makes an Engineering Team Effective</title>
      <link>https://www.secdev.uk/blog/leadership/3.1-what-actually-makes-an-engineering-team-effective/</link>
      <pubDate>Mon, 18 Aug 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/leadership/3.1-what-actually-makes-an-engineering-team-effective/</guid>
      <description>&lt;p&gt;I spent the early part of my career believing that team effectiveness was mostly about talent. Get the best engineers, give them interesting problems, and get out of the way. It&amp;rsquo;s an appealing theory, and it&amp;rsquo;s wrong, or at least, it&amp;rsquo;s missing the most important part.&lt;/p&gt;&#xA;&lt;p&gt;Google&amp;rsquo;s Project Aristotle studied 180 teams and ran 35 statistical models to find what made some teams effective and others not. The finding that surprised everyone, including Google, was that who was on the team mattered far less than how the team worked together. Individual talent, seniority, even team size, none of these were the primary drivers.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Race Conditions</title>
      <link>https://www.secdev.uk/blog/technology/2025-08-16-race-conditions/</link>
      <pubDate>Sat, 16 Aug 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-08-16-race-conditions/</guid>
      <description>&lt;p&gt;Race conditions (CWE-362) are, in my opinion, the most insidious class of security bugs you&amp;rsquo;ll encounter. They occur when the behaviour of a program depends on the relative timing of concurrent operations, and at least one of those operations modifies shared state. The window between a check and a subsequent use of the checked value, the classic time-of-check to time-of-use (TOCTOU) pattern, is the most exploited form, but races also show up in counter increments, balance updates, session management, and file operations. What makes race conditions uniquely dangerous is their non-determinism: the bug may not manifest in thousands of test runs, then appear under production load when two requests arrive within microseconds of each other. I want to walk through race conditions in Python, Go, Java, and Rust, from the obvious unprotected counter to the subtle channel-based ordering assumption that passes every test but fails under contention.&lt;/p&gt;</description>
    </item>
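The unprotected counter that post mentions comes down to one fact: counter += 1 is a read, an add, and a write, and two threads can interleave those steps. A minimal sketch of the broken shape and the lock that fixes it (names are illustrative):

```python
import threading

counter = 0
lock = threading.Lock()

def deposit_unsafe(n):
    """The racy shape: read-modify-write with no synchronisation.
    Under contention, two threads read the same value and one
    increment is lost."""
    global counter
    for _ in range(n):
        counter += 1

def deposit_safe(n):
    """The fix: the whole read-modify-write happens under the lock."""
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=deposit_safe, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # prints 400000 - with the lock, no update is lost
```

Swap deposit_safe for deposit_unsafe and the final count becomes non-deterministic, which is exactly the pass-every-test, fail-under-load behaviour the post describes.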
    <item>
      <title>Integer Overflow</title>
      <link>https://www.secdev.uk/blog/technology/2025-08-02-integer-overflow/</link>
      <pubDate>Sat, 02 Aug 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-08-02-integer-overflow/</guid>
      <description>&lt;p&gt;Integer overflow (CWE-190) is one of those bugs that I find endlessly fascinating because of how quietly destructive it is. It happens when an arithmetic operation produces a value that exceeds the maximum (or falls below the minimum) representable value for the integer type. In C and C++, signed integer overflow is undefined behaviour, the compiler is free to assume it never happens, and optimizations built on that assumption can eliminate bounds checks entirely. Unsigned overflow wraps around silently. Go and Java define overflow as wrapping (two&amp;rsquo;s complement), which prevents undefined behaviour but still produces incorrect results that lead to security vulnerabilities: undersized allocations, bypassed length checks, and negative indices into arrays. Rust panics on overflow in debug mode but wraps in release mode by default, creating a gap between testing and production behaviour that caught me off guard when I first started digging into Rust&amp;rsquo;s safety guarantees. I want to walk through integer overflow across C, C++, Rust, Go, and Java, from the textbook multiplication overflow to the subtle cast truncation that can survive expert review.&lt;/p&gt;</description>
    </item>
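Python integers are arbitrary precision, so the wrap has to be simulated, but the undersized-allocation arithmetic that post describes looks like this when reduced modulo 2^32, as an unsigned 32-bit multiply would be (function name is mine):

```python
# Simulate unsigned 32-bit wrap-around: in C, Go, or Java the
# multiplication below silently wraps instead of growing.
U32 = 2 ** 32

def alloc_size(count, elem_size):
    """What a 32-bit size_t computes for count * elem_size."""
    return (count * elem_size) % U32

# An attacker-supplied count of 0x40000001 elements of 4 bytes each:
# the true product is 0x100000004, but the top bit falls off...
print(alloc_size(0x40000001, 4))  # prints 4

# ...so the program allocates 4 bytes, then writes count elements into
# it - a heap overflow born from a single unchecked multiply.
```

The defence in every affected language is the same shape: check count against MAX / elem_size before multiplying, not after.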
    <item>
      <title>The Accidental Manager: Leading When You Didn&#39;t Plan To</title>
      <link>https://www.secdev.uk/blog/leadership/2.10-the-accidental-manager/</link>
      <pubDate>Mon, 28 Jul 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/leadership/2.10-the-accidental-manager/</guid>
      <description>&lt;p&gt;Not every manager chose to be one. Sometimes the team grows around you and suddenly you&amp;rsquo;re the most senior person. Sometimes your manager leaves and someone needs to fill the gap. Sometimes the company is too small to hire a dedicated manager, and you&amp;rsquo;re the engineer who seems most capable of doing it alongside your technical work.&lt;/p&gt;&#xA;&lt;p&gt;However it happens, you find yourself managing people without ever having made a deliberate decision to pursue management. And the question you&amp;rsquo;re left with is: now what?&lt;/p&gt;</description>
    </item>
    <item>
      <title>Null Pointer Dereference</title>
      <link>https://www.secdev.uk/blog/technology/2025-07-19-null-pointer-dereference/</link>
      <pubDate>Sat, 19 Jul 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-07-19-null-pointer-dereference/</guid>
      <description>&lt;p&gt;Null pointer dereference (CWE-476) is one of those bugs that shows up across every language, and the more I researched it for this post, the more I was struck by how much damage it can do depending on context. The consequences vary dramatically: C programs crash with a segfault (or worse, the kernel maps page zero and an attacker gets code execution), C++ invokes undefined behaviour that the compiler may optimise into literally anything, Go panics with a nil pointer dereference that kills the goroutine or the whole program, and Java throws a &lt;code&gt;NullPointerException&lt;/code&gt; that can crash the app or leak stack traces to an attacker. MITRE ranks CWE-476 consistently in the top 25 most dangerous software weaknesses, and digging into the CVE data, that ranking is well deserved. I want to walk through C, C++, Go, and Java here, from the obvious unchecked &lt;code&gt;malloc&lt;/code&gt; return to the subtle nil interface trap in Go and the conditional path where null silently propagates through multiple function calls.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Java Security Code Review Guide</title>
      <link>https://www.secdev.uk/blog/articles/java_security_review_guide/</link>
      <pubDate>Wed, 09 Jul 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/articles/java_security_review_guide/</guid>
      <description>&lt;h2 id=&#34;1-introduction&#34;&gt;1. Introduction&lt;/h2&gt;&#xA;&lt;p&gt;I put this guide together as a structured approach to security-focused code review for Java applications. Whether you&amp;rsquo;re just starting to identify security vulnerabilities in Java code or you&amp;rsquo;re an experienced developer looking for a language-specific checklist, I&amp;rsquo;ve tried to make it useful at both levels.&lt;/p&gt;&#xA;&lt;p&gt;Java&amp;rsquo;s strong type system, managed memory, and mature ecosystem offer many safety guarantees, but the more I researched Java security, the more I realised the language and its frameworks still expose developers to a wide range of pitfalls including injection, deserialisation, reflection abuse, JNDI injection, and misconfigured XML parsers. What follows covers manual review strategies, common anti-patterns, recommended tooling, and vulnerability patterns organised by class, with cross-references to the intentionally vulnerable examples in this project.&lt;/p&gt;</description>
    </item>
    <item>
      <title>When You Miss Writing Code</title>
      <link>https://www.secdev.uk/blog/leadership/2.9-when-you-miss-writing-code/</link>
      <pubDate>Mon, 07 Jul 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/leadership/2.9-when-you-miss-writing-code/</guid>
      <description>&lt;p&gt;There&amp;rsquo;s a particular kind of melancholy that hits engineering managers around the six-month mark. You&amp;rsquo;re in your fourth meeting of the day, you&amp;rsquo;ve just spent an hour on a spreadsheet, and you glance over at your team, heads down, headphones on, deep in the flow state you remember so well. And you think: I miss that.&lt;/p&gt;&#xA;&lt;p&gt;I still feel it. After all these years, there are days when I&amp;rsquo;d trade every meeting on my calendar for four uninterrupted hours with a codebase and a problem to solve. The feeling doesn&amp;rsquo;t go away. But understanding it, what it actually is, and what to do with it, makes it manageable.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Use After Free</title>
      <link>https://www.secdev.uk/blog/technology/2025-07-05-use-after-free/</link>
      <pubDate>Sat, 05 Jul 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-07-05-use-after-free/</guid>
      <description>&lt;p&gt;Use-after-free (CWE-416) is one of those bug classes that I wanted to understand deeply because it keeps showing up at the root of high-profile exploits. It occurs when a program continues to use a pointer after the memory it references has been freed. The freed memory may be reallocated for a different purpose, and the dangling pointer now reads or writes data that belongs to a completely different object. Attackers exploit this by controlling what gets allocated into the freed slot, replacing a data buffer with a crafted object that contains a function pointer, then triggering the dangling pointer to call through it. Reading through CVE databases, use-after-free is at the root of hundreds of browser exploits, kernel privilege escalations, and server compromises. This post covers C and C++, from the obvious free-then-use to the subtle shared-pointer aliasing and callback registration patterns that can evade expert review.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Out-of-Bounds Writes</title>
      <link>https://www.secdev.uk/blog/technology/2025-06-21-out-of-bounds-writes/</link>
      <pubDate>Sat, 21 Jun 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-06-21-out-of-bounds-writes/</guid>
      <description>&lt;p&gt;Out-of-bounds writes (CWE-787) are the single most dangerous class of memory corruption vulnerabilities on the SANS/CWE Top 25, and they&amp;rsquo;ve held that position for years. The reason is clear once you dig into the mechanics: writing past the end of a buffer can overwrite return addresses, function pointers, vtable entries, and adjacent heap metadata, giving attackers arbitrary code execution. Unlike higher-level languages where the runtime catches array index violations, C and C++ silently corrupt memory, and the consequences may not manifest until thousands of instructions later. Even Rust, with its ownership model, is vulnerable when &lt;code&gt;unsafe&lt;/code&gt; blocks bypass the borrow checker. In this post I&amp;rsquo;ll dissect out-of-bounds writes in C, C++, and Rust, from the classic &lt;code&gt;strcpy&lt;/code&gt; overflow to the subtle off-by-one in pointer arithmetic that can survive expert review.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Finding Your Leadership Style</title>
      <link>https://www.secdev.uk/blog/leadership/2.8-finding-your-leadership-style/</link>
      <pubDate>Mon, 16 Jun 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/leadership/2.8-finding-your-leadership-style/</guid>
      <description>&lt;p&gt;For years, I tried to lead like other people. I&amp;rsquo;d read about a leader I admired, adopt their approach, and wonder why it felt forced. It took me longer than I&amp;rsquo;d like to admit to realise that leadership style isn&amp;rsquo;t something you copy, it&amp;rsquo;s something you discover through practice, reflection, and a fair amount of getting it wrong.&lt;/p&gt;&#xA;&lt;p&gt;The good news is that there&amp;rsquo;s no single correct way to lead an engineering team. The bad news is that your natural style, whatever it is, won&amp;rsquo;t be sufficient for every situation. The best leaders I&amp;rsquo;ve worked with aren&amp;rsquo;t the ones with the most charismatic style, they&amp;rsquo;re the ones who can shift between styles depending on what the moment requires.&lt;/p&gt;</description>
    </item>
    <item>
      <title>SSRF</title>
      <link>https://www.secdev.uk/blog/technology/2025-06-07-ssrf/</link>
      <pubDate>Sat, 07 Jun 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-06-07-ssrf/</guid>
      <description>&lt;p&gt;Server-Side Request Forgery is one of those vulnerability classes that I&amp;rsquo;ve grown to respect more and more the deeper I dig into it. The idea is simple: you trick a server into making HTTP requests to destinations you choose, turning it into your personal proxy. It can reach internal services, cloud metadata endpoints, and private networks that you&amp;rsquo;d never touch directly from the outside. OWASP gave SSRF its own category (A10) in 2021, and reading through the rationale, it was overdue. The case studies are striking: a single SSRF against &lt;code&gt;http://169.254.169.254/&lt;/code&gt; on AWS can leak IAM credentials and compromise an entire account. In this post, I&amp;rsquo;ll walk through Python, Java, Go, and JavaScript examples, from the textbook URL-in-a-parameter to the subtle redirect-chain and DNS rebinding variants that make SSRF so hard to defend against.&lt;/p&gt;</description>
    </item>
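A minimal sketch of the resolve-then-check defence against the textbook case, using only the standard library (the function name is mine, not from the post). Note what it does not cover: the redirect-chain and DNS rebinding variants the post discusses require re-checking at connection time, not just once up front.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url):
    """Reject URLs that resolve to loopback, private, or link-local
    addresses (169.254.169.254, the cloud metadata endpoint, is
    link-local). String-matching the URL alone is not enough, because
    DNS can point any hostname at an internal address."""
    host = urlparse(url).hostname
    if host is None:
        return False
    try:
        addr = ipaddress.ip_address(socket.gethostbyname(host))
    except (socket.gaierror, ValueError):
        return False
    return not (addr.is_loopback or addr.is_private or addr.is_link_local)

print(is_safe_url("http://169.254.169.254/latest/meta-data/"))  # prints False
print(is_safe_url("http://127.0.0.1/admin"))                    # prints False
```

To close the rebinding gap you would pin the resolved address and connect to it directly, rather than resolving twice.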
    <item>
      <title>The Trap of Doing Two Jobs</title>
      <link>https://www.secdev.uk/blog/leadership/2.7-the-trap-of-doing-two-jobs/</link>
      <pubDate>Mon, 26 May 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/leadership/2.7-the-trap-of-doing-two-jobs/</guid>
      <description>&lt;p&gt;I&amp;rsquo;ve fallen into this trap more than once. You get promoted, you take on the new responsibilities, and you keep doing the old ones too. Not because anyone asked you to, but because the old work is familiar, you&amp;rsquo;re good at it, and there&amp;rsquo;s a voice in your head saying &amp;ldquo;if I don&amp;rsquo;t do this, it won&amp;rsquo;t get done right.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;That voice is wrong. Or rather, it might be right in the short term, but it&amp;rsquo;s catastrophically wrong in the medium term. Doing two jobs doesn&amp;rsquo;t make you indispensable; it makes you a bottleneck, and it prevents your team from growing into the space you&amp;rsquo;re supposed to have vacated.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Logging Failures</title>
      <link>https://www.secdev.uk/blog/technology/2025-05-24-logging-failures/</link>
      <pubDate>Sat, 24 May 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-05-24-logging-failures/</guid>
      <description>&lt;p&gt;When I started researching logging failures for this post, I expected to find dramatic exploit chains. Instead, what I found was something more unsettling: the absence of evidence. The most frustrating thing about incident response isn&amp;rsquo;t finding a sophisticated exploit; it&amp;rsquo;s opening the log aggregator and finding nothing. No entries, no breadcrumbs, no sign that anything happened at all. That&amp;rsquo;s CWE-778 (Insufficient Logging), and it&amp;rsquo;s the backbone of OWASP A09: Security Logging and Monitoring Failures. This isn&amp;rsquo;t a crash or a data leak in the traditional sense, and when your incident response team can&amp;rsquo;t investigate what was never recorded, the attacker wins by default. In this post, I&amp;rsquo;m going to walk through logging failures across Python, Java, and Go, from the obvious missing-log-statement to the subtle cases where logging exists but captures the wrong data, at the wrong level, or silently drops events under load.&lt;/p&gt;</description>
    </item>
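As a sketch of the baseline that post argues for, security-relevant events logged at a level that survives production filters, with enough context (who, from where, what outcome) to investigate later. Names here are illustrative, not from the post.

```python
import logging

# A typical production filter keeps WARNING and above; logging a failed
# login at DEBUG means it vanishes exactly when you need it.
logging.basicConfig(level=logging.INFO,
                    format="%(levelname)s %(name)s %(message)s")
log = logging.getLogger("auth")

def login(user, ok, source_ip):
    """Record the security outcome of every attempt, success or not."""
    if ok:
        log.info("login success user=%s ip=%s", user, source_ip)
    else:
        # WARNING, not DEBUG: failed logins are the breadcrumbs an
        # incident responder follows.
        log.warning("login failure user=%s ip=%s", user, source_ip)

login("alice", False, "203.0.113.9")
```

The structured key=value fields matter as much as the level: a log line without the source IP records that something happened but not where it came from.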
    <item>
      <title>Integrity Failures</title>
      <link>https://www.secdev.uk/blog/technology/2025-05-10-integrity-failures/</link>
      <pubDate>Sat, 10 May 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-05-10-integrity-failures/</guid>
      <description>&lt;p&gt;Integrity failures happen when an application trusts data or code that hasn&amp;rsquo;t been verified, and they can lead to some of the most devastating compromises out there. OWASP A08 covers two patterns I find particularly fascinating: unsafe deserialization (CWE-502), where untrusted data is fed into a deserializer that can execute arbitrary code, and inclusion of functionality from untrusted sources (CWE-829), where the application loads and runs code from URLs, plugins, or scripts without integrity checks. Both patterns share a root cause: the application assumes that incoming data or code is benign. In this post I&amp;rsquo;ll walk through Python, Java, JavaScript, and Go, from the textbook &lt;code&gt;pickle.loads()&lt;/code&gt; to the subtle VM sandbox escapes that can survive expert review.&lt;/p&gt;</description>
    </item>
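The counter-pattern to pickle.loads() is a data-only format plus an integrity tag verified before anything is trusted. A minimal HMAC sketch, standard library only (the key and field names are mine):

```python
import hashlib
import hmac
import json

SECRET = b"demo-key"  # illustrative; a real key comes from a secret store

def sign(payload: dict) -> tuple:
    """Serialise to a data-only format and compute an HMAC tag over it."""
    body = json.dumps(payload, sort_keys=True).encode()
    return body, hmac.new(SECRET, body, hashlib.sha256).hexdigest()

def verify_and_load(body: bytes, tag: str) -> dict:
    """Check the tag in constant time BEFORE deserialising anything."""
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("integrity check failed")
    return json.loads(body)

body, tag = sign({"user": "alice", "role": "reader"})
print(verify_and_load(body, tag)["role"])   # prints reader

tampered = body.replace(b"reader", b"admin")
try:
    verify_and_load(tampered, tag)
except ValueError as e:
    print(e)                                # prints integrity check failed
```

Two properties do the work: JSON cannot name a callable the way a pickle stream can, and hmac.compare_digest() rejects tampering without leaking timing information.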
    <item>
      <title>Managing Former Peers</title>
      <link>https://www.secdev.uk/blog/leadership/2.6-managing-former-peers/</link>
      <pubDate>Mon, 05 May 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/leadership/2.6-managing-former-peers/</guid>
      <description>&lt;p&gt;There&amp;rsquo;s a particular kind of awkwardness that comes with being promoted above people you used to sit next to as equals. Yesterday you were peers, debating technical approaches over coffee. Today you&amp;rsquo;re their boss, responsible for their performance reviews, their career development, and potentially their continued employment. The relationship has fundamentally changed, and pretending otherwise is the worst thing you can do.&lt;/p&gt;&#xA;&lt;p&gt;I&amp;rsquo;ve been on both sides of this, promoted above peers, and managed by someone who used to be my equal. Neither side is comfortable, and both require deliberate effort to navigate well.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Authentication Failures</title>
      <link>https://www.secdev.uk/blog/technology/2025-04-26-authentication-failures/</link>
      <pubDate>Sat, 26 Apr 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-04-26-authentication-failures/</guid>
      <description>&lt;p&gt;Authentication is the front door of every application, and OWASP A07 documents how often that door is left unlocked. When I started digging into authentication failures, I realised they go far beyond weak passwords: they encompass hardcoded credentials compiled into binaries, brute-force attacks with no rate limiting, password hashes that can be reversed in seconds, and reset flows that hand tokens directly to attackers. These patterns show up in production regularly, sometimes in the same application. This post covers three CWEs across Python, Java, Go, and Rust: CWE-798 (Use of Hard-Coded Credentials), CWE-287 (Improper Authentication), and CWE-307 (Improper Restriction of Excessive Authentication Attempts).&lt;/p&gt;</description>
    </item>
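Of the three CWEs, CWE-307 is the easiest to sketch: a sliding window of recent attempts per user. This is a minimal in-memory version with illustrative names and constants; a production limiter would live in shared storage and also key on source IP.

```python
import time
from collections import defaultdict, deque

WINDOW = 60.0       # seconds of history to keep
MAX_ATTEMPTS = 5    # attempts allowed inside the window

attempts = defaultdict(deque)   # user -> timestamps of recent attempts

def allow_attempt(user, now=None):
    """Return True if this login attempt may proceed, False if the
    user has exhausted their attempts inside the sliding window."""
    now = time.monotonic() if now is None else now
    q = attempts[user]
    while q and now - q[0] > WINDOW:
        q.popleft()             # drop attempts that aged out
    if len(q) >= MAX_ATTEMPTS:
        return False            # brute force blocked
    q.append(now)
    return True

print([allow_attempt("alice", now=float(i)) for i in range(7)])
# prints [True, True, True, True, True, False, False]
```

The key property is that refusal happens before password verification, so an attacker cannot grind through a wordlist at network speed.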
    <item>
      <title>Your First 90 Days as an Engineering Leader</title>
      <link>https://www.secdev.uk/blog/leadership/2.5-your-first-90-days-as-an-engineering-leader/</link>
      <pubDate>Mon, 14 Apr 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/leadership/2.5-your-first-90-days-as-an-engineering-leader/</guid>
      <description>&lt;p&gt;The first 90 days in a new leadership role are simultaneously the most important and the most disorienting period of your tenure. Everything feels urgent. Everyone has opinions about what you should focus on. And the temptation to prove yourself by making immediate changes is almost irresistible.&lt;/p&gt;&#xA;&lt;p&gt;Having started new leadership roles at both start-ups and large corporates, I can tell you that the single most valuable thing you can do in those first 90 days is slow down. Not because urgency isn&amp;rsquo;t real, but because acting on incomplete understanding almost always creates more problems than it solves.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Vulnerable Components</title>
      <link>https://www.secdev.uk/blog/technology/2025-04-12-vulnerable-components/</link>
      <pubDate>Sat, 12 Apr 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-04-12-vulnerable-components/</guid>
      <description>&lt;p&gt;Your application is only as secure as its least-maintained dependency, and this is one of those lessons that really sinks in once you start digging into dependency trees. OWASP A06 (Vulnerable and Outdated Components) covers the reality that most modern applications are more dependency code than application code, and a single outdated library can undermine every security measure you&amp;rsquo;ve built. CWE-1104 captures this: the use of unmaintained third-party components with known vulnerabilities. In this post I&amp;rsquo;ll walk through real dependency chains in Python, Java, and JavaScript, from the Log4Shell-level disasters that make headlines to the subtle version pins that quietly accumulate CVEs while nobody&amp;rsquo;s watching.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Security Misconfiguration</title>
      <link>https://www.secdev.uk/blog/technology/2025-03-29-security-misconfiguration/</link>
      <pubDate>Sat, 29 Mar 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-03-29-security-misconfiguration/</guid>
      <description>&lt;p&gt;Security misconfiguration is the vulnerability class that really drove home for me why secure defaults matter more than secure documentation. OWASP A05 covers the gap between what a framework &lt;em&gt;can&lt;/em&gt; do securely and how developers actually configure it. Debug mode left on in production. CORS wide open. XML parsers that resolve external entities. Settings endpoints with no authentication. These aren&amp;rsquo;t coding mistakes; they&amp;rsquo;re configuration mistakes, and they show up everywhere. In this post I&amp;rsquo;ll walk through Python, Java, Go, and JavaScript examples covering CWE-16 (Improper Configuration) and CWE-611 (XML External Entity Processing), from the flags that any reviewer would catch to the subtle combinations that can survive months in production.&lt;/p&gt;</description>
    </item>
    <item>
      <title>C&#43;&#43; Security Code Review Guide</title>
      <link>https://www.secdev.uk/blog/articles/cpp_security_review_guide/</link>
      <pubDate>Mon, 24 Mar 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/articles/cpp_security_review_guide/</guid>
      <description>&lt;h2 id=&#34;1-introduction&#34;&gt;1. Introduction&lt;/h2&gt;&#xA;&lt;p&gt;I put this guide together as a structured approach to security-focused code review for C++ applications. Whether you&amp;rsquo;re just starting to identify security vulnerabilities in C++ code or you&amp;rsquo;re an experienced developer looking for a language-specific checklist, I&amp;rsquo;ve tried to make it useful at both levels.&lt;/p&gt;&#xA;&lt;p&gt;C++ inherits many of C&amp;rsquo;s low-level risks (manual memory management, pointer arithmetic, and the absence of bounds checking) while adding its own layer of complexity through classes, templates, smart pointers, the STL, RAII, and operator overloading. What I find fascinating about C++ security is the duality: when used correctly, features like &lt;code&gt;std::unique_ptr&lt;/code&gt;, &lt;code&gt;std::vector&lt;/code&gt;, and RAII can eliminate entire vulnerability classes. When misused, they create subtle bugs that are harder to spot than their C equivalents: dangling references from moved-from objects, use-after-free through raw pointer aliases to smart-pointer-managed memory, iterator invalidation, and implicit conversions in template code. What follows covers manual review strategies, common anti-patterns, recommended tooling, and vulnerability patterns organised by class, with cross-references to the intentionally vulnerable examples in this project.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Learning to Delegate (and Why It Feels So Wrong)</title>
      <link>https://www.secdev.uk/blog/leadership/2.4-learning-to-delegate/</link>
      <pubDate>Mon, 24 Mar 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/leadership/2.4-learning-to-delegate/</guid>
      <description>&lt;p&gt;Early in my management career, I had a team of six engineers and I was still the person who knew the most about every part of the system. When something urgent came up, I&amp;rsquo;d fix it myself. When a design decision needed making, I&amp;rsquo;d make it. When a PR needed reviewing, I&amp;rsquo;d review it. I was efficient, responsive, and completely unsustainable.&lt;/p&gt;&#xA;&lt;p&gt;It took a holiday, a proper two-week break where I was genuinely unreachable, for me to see the problem. The team didn&amp;rsquo;t fall apart while I was away. But they didn&amp;rsquo;t move forward either. They&amp;rsquo;d been waiting for me on three separate decisions, and nobody felt empowered to make them without my input. I&amp;rsquo;d created a team that was dependent on me, and I&amp;rsquo;d done it by being &amp;ldquo;helpful.&amp;rdquo;&lt;/p&gt;</description>
    </item>
    <item>
      <title>Insecure Design</title>
      <link>https://www.secdev.uk/blog/technology/2025-03-15-insecure-design/</link>
      <pubDate>Sat, 15 Mar 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-03-15-insecure-design/</guid>
      <description>&lt;p&gt;Insecure design is the vulnerability class that fascinates me the most, because no amount of perfect implementation can fix it. It lives in the architecture, the data flow, the decisions made before anyone wrote a line of code. OWASP A04 captures something that shows up again and again in real-world applications: systems that are insecure by design, not because of a coding mistake, but because the system was never designed to be secure in the first place. In this post, I want to focus on two of the most common manifestations: verbose error messages that leak internal details (CWE-209) and insufficiently protected credentials (CWE-522). I&amp;rsquo;ll walk through Python, Java, and JavaScript examples that range from the immediately obvious to the patterns that, from what I&amp;rsquo;ve seen in code reviews, can survive months without being caught.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Staying Technical as You Move Up</title>
      <link>https://www.secdev.uk/blog/leadership/2.3-staying-technical-as-you-move-up/</link>
      <pubDate>Mon, 03 Mar 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/leadership/2.3-staying-technical-as-you-move-up/</guid>
      <description>&lt;p&gt;When I moved into my first management role, a more senior leader told me: &amp;ldquo;You&amp;rsquo;ll stop writing code within six months.&amp;rdquo; He said it like it was inevitable, a law of nature. I took it as a challenge. Twenty-five years later, I still write code. Not as much as I used to, and not the same kind, but I&amp;rsquo;ve never fully let go. And I think that&amp;rsquo;s been one of the most important decisions of my career.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Cryptographic Failures That Pass Code Review</title>
      <link>https://www.secdev.uk/blog/technology/2025-03-01-cryptographic-failures-that-pass-code-review/</link>
      <pubDate>Sat, 01 Mar 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-03-01-cryptographic-failures-that-pass-code-review/</guid>
      <description>&lt;p&gt;Cryptographic code is uniquely dangerous, and it&amp;rsquo;s one of the areas I find most challenging to review. The reason is simple: it can be completely wrong and still appear to work perfectly. A broken hash function still produces a hash. A weak cipher still encrypts and decrypts. A predictable random number generator still generates numbers. The application runs, tests pass, and the vulnerability sits quietly until an attacker exploits it. In this post, I want to walk through the cryptographic failures that routinely survive code review across Python, Java, Go, and Rust, from the obvious use of MD5 to the subtle misuse of otherwise strong primitives.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Broken Access Control</title>
      <link>https://www.secdev.uk/blog/technology/2025-02-15-broken-access-control/</link>
      <pubDate>Sat, 15 Feb 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-02-15-broken-access-control/</guid>
      <description>&lt;p&gt;Broken access control sits at the top of the OWASP Top 10 for good reason, and it&amp;rsquo;s the vulnerability class I find most fascinating to research. It&amp;rsquo;s the most common serious vulnerability in modern web applications, and it&amp;rsquo;s almost entirely a logic problem: no amount of input sanitisation or encryption fixes it. The application simply fails to verify that the authenticated user is authorised to perform the requested action on the requested resource. In this post, I want to walk through the patterns that show up across Python, Java, and Go, from the IDOR that any pentester would find in minutes to the subtle authorisation gaps that can survive months of code review.&lt;/p&gt;</description>
    </item>
    <item>
      <title>What Nobody Tells You About Being a Tech Lead</title>
      <link>https://www.secdev.uk/blog/leadership/2.2-what-nobody-tells-you-about-being-a-tech-lead/</link>
      <pubDate>Mon, 10 Feb 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/leadership/2.2-what-nobody-tells-you-about-being-a-tech-lead/</guid>
      <description>&lt;p&gt;The tech lead role is one of the most misunderstood positions in engineering. It&amp;rsquo;s not a title, it&amp;rsquo;s a set of responsibilities. It&amp;rsquo;s not a promotion, it&amp;rsquo;s a lateral move into a fundamentally different kind of work. And it&amp;rsquo;s where most engineers first discover whether they enjoy leadership, or whether they&amp;rsquo;d rather stay deep in the code.&lt;/p&gt;&#xA;&lt;p&gt;I&amp;rsquo;ve been a tech lead multiple times across my career, and I still think it&amp;rsquo;s the hardest role in engineering. Not because the problems are the most complex technically, but because you&amp;rsquo;re pulled in three directions simultaneously and nobody tells you how to balance them.&lt;/p&gt;</description>
    </item>
    <item>
      <title>C Security Code Review Guide</title>
      <link>https://www.secdev.uk/blog/articles/c_security_review_guide/</link>
      <pubDate>Tue, 04 Feb 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/articles/c_security_review_guide/</guid>
      <description>&lt;h2 id=&#34;1-introduction&#34;&gt;1. Introduction&lt;/h2&gt;&#xA;&lt;p&gt;I put this guide together as a structured approach to security-focused code review for C applications. Whether you&amp;rsquo;re just starting to identify security vulnerabilities in C code or you&amp;rsquo;re an experienced developer looking for a language-specific checklist, I&amp;rsquo;ve tried to make it useful at both levels.&lt;/p&gt;&#xA;&lt;p&gt;C&amp;rsquo;s manual memory management, lack of bounds checking, direct pointer arithmetic, and minimal runtime safety make it uniquely prone to entire classes of vulnerabilities that higher-level languages prevent by design. The more I dug into C codebases, the more I appreciated just how many ways things can go wrong: buffer overflows, use-after-free, null pointer dereferences, integer overflows, and format string attacks are all first-class concerns. What follows covers manual review strategies, common anti-patterns, recommended tooling, and vulnerability patterns organised by class, with cross-references to the intentionally vulnerable examples in this project.&lt;/p&gt;</description>
    </item>
    <item>
      <title>XSS Is Not Just a JavaScript Problem</title>
      <link>https://www.secdev.uk/blog/technology/2025-02-01-xss-is-not-just-a-javascript-problem/</link>
      <pubDate>Sat, 01 Feb 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-02-01-xss-is-not-just-a-javascript-problem/</guid>
      <description>&lt;p&gt;Cross-site scripting is often framed as a front-end problem: something that happens in JavaScript and gets fixed with JavaScript. But the more I dug into this, the clearer it became that XSS vulnerabilities almost always originate on the server side, in whatever language is generating the HTML. I&amp;rsquo;ve found XSS in Python templates, Java JSPs, Go&amp;rsquo;s &lt;code&gt;html/template&lt;/code&gt; misuse, Rust web frameworks, and server-rendered JavaScript. The language you write your backend in determines which XSS patterns you&amp;rsquo;ll run into and which ones will sneak past your review.&lt;/p&gt;</description>
    </item>
    <item>
      <title>The Identity Crisis of Becoming a Manager</title>
      <link>https://www.secdev.uk/blog/leadership/2.1-the-identity-crisis-of-becoming-a-manager/</link>
      <pubDate>Mon, 20 Jan 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/leadership/2.1-the-identity-crisis-of-becoming-a-manager/</guid>
      <description>&lt;p&gt;The day I officially became a manager, I still wrote code. The day after that, I still wrote code. For weeks, I kept writing code, attending the same standups, reviewing the same PRs, and fitting &amp;ldquo;management stuff&amp;rdquo; into the gaps. It took me an embarrassingly long time to realise that I hadn&amp;rsquo;t actually changed what I was doing. I&amp;rsquo;d just added a new title to the old job.&lt;/p&gt;&#xA;&lt;p&gt;That&amp;rsquo;s the trap, and nearly everyone falls into it.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Command Injection Beyond os.system</title>
      <link>https://www.secdev.uk/blog/technology/2025-01-18-command-injection-beyond-os-system/</link>
      <pubDate>Sat, 18 Jan 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-01-18-command-injection-beyond-os-system/</guid>
      <description>&lt;p&gt;When most developers hear &amp;ldquo;command injection,&amp;rdquo; they think of &lt;code&gt;os.system()&lt;/code&gt; in Python or &lt;code&gt;Runtime.exec()&lt;/code&gt; in Java. Those are the textbook examples, and most teams know to avoid them. But the more I researched this topic, the more I realised that command injection surfaces through dozens of less obvious APIs in every language: subprocess pipes, shell expansions, backtick operators, and even seemingly safe exec functions that become dangerous with the wrong arguments. This is one of my favourite vulnerability classes to dig into because the attack surface is so much wider than people realise. Let me walk you through command injection patterns across seven languages, from the obvious to the genuinely subtle.&lt;/p&gt;</description>
    </item>
    <item>
      <title>SQL Injection Across Languages</title>
      <link>https://www.secdev.uk/blog/technology/2025-01-04-sql-injection-across-languages/</link>
      <pubDate>Sat, 04 Jan 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-01-04-sql-injection-across-languages/</guid>
      <description>&lt;p&gt;SQL injection is one of those vulnerability classes that refuses to go away, no matter how much the industry talks about it. I&amp;rsquo;ve been digging into how it manifests across different languages (Python, Java, Go, and JavaScript), and the root cause is always the same: untrusted input reaches a SQL query without proper parameterisation. But the way developers introduce it varies wildly depending on the framework, ORM, and idioms of each language. In this post, I want to walk through real examples across these four languages, showing both the obvious patterns that any reviewer would catch and the subtle ones that slip through code review more often than you&amp;rsquo;d expect.&lt;/p&gt;</description>
    </item>
    <item>
      <title>The Loneliness of Technical Leadership (and What to Do About It)</title>
      <link>https://www.secdev.uk/blog/leadership/1.10-the-loneliness-of-technical-leadership/</link>
      <pubDate>Mon, 30 Dec 2024 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/leadership/1.10-the-loneliness-of-technical-leadership/</guid>
      <description>&lt;p&gt;Nobody warns you about this part. You get promoted, you take on more responsibility, you start leading people, and somewhere along the way, you realise you have fewer people you can be genuinely honest with. The higher you go, the smaller that circle gets.&lt;/p&gt;&#xA;&lt;p&gt;I&amp;rsquo;ve felt this at different points across my career, at a start-up where I was the only technical leader, and at a large corporate where I was surrounded by peers but couldn&amp;rsquo;t always be candid about what was going wrong. The loneliness of leadership isn&amp;rsquo;t about being physically alone. It&amp;rsquo;s about the growing gap between what you&amp;rsquo;re carrying and who you can share it with.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Building Diverse Teams That Actually Work</title>
      <link>https://www.secdev.uk/blog/leadership/1.9-building-diverse-teams-that-actually-work/</link>
      <pubDate>Mon, 09 Dec 2024 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/leadership/1.9-building-diverse-teams-that-actually-work/</guid>
      <description>&lt;p&gt;This is one of those topics where I&amp;rsquo;ve watched the conversation evolve significantly over the course of my career. Twenty-five years ago, &amp;ldquo;diversity&amp;rdquo; in tech largely meant making sure the team photo didn&amp;rsquo;t look &lt;em&gt;entirely&lt;/em&gt; homogeneous. The bar was low, and most of us, myself included, didn&amp;rsquo;t think critically enough about what we were missing. The more I&amp;rsquo;ve researched this topic and reflected on my own experience leading teams of different shapes and sizes, the more I&amp;rsquo;ve come to believe that diversity isn&amp;rsquo;t just the right thing to do. It&amp;rsquo;s the thing that makes teams actually work better. But only if you pair it with genuine inclusion. Without that, it&amp;rsquo;s just optics.&lt;/p&gt;</description>
    </item>
    <item>
      <title>How to Have Difficult Conversations</title>
      <link>https://www.secdev.uk/blog/leadership/1.8-how-to-have-difficult-conversations/</link>
      <pubDate>Mon, 18 Nov 2024 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/leadership/1.8-how-to-have-difficult-conversations/</guid>
      <description>&lt;p&gt;There&amp;rsquo;s a particular kind of dread that settles in the night before you know you have to have a difficult conversation. You rehearse it in the shower. You draft opening lines in your head while making coffee. You tell yourself it&amp;rsquo;ll be fine, and then you spend the rest of the morning hoping the other person calls in sick.&lt;/p&gt;&#xA;&lt;p&gt;I&amp;rsquo;ve been having difficult conversations as a manager for a long time now, and I want to be honest about something: they don&amp;rsquo;t get easier. What changes is that you get better at having them. You learn to sit with the discomfort rather than rushing through it. You learn that the conversation you&amp;rsquo;re dreading is almost never as bad as the one you&amp;rsquo;ve been having with yourself about it.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Communication Is a Craft, Not a Soft Skill</title>
      <link>https://www.secdev.uk/blog/leadership/1.7-communication-is-a-craft-not-a-soft-skill/</link>
      <pubDate>Mon, 28 Oct 2024 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/leadership/1.7-communication-is-a-craft-not-a-soft-skill/</guid>
      <description>&lt;p&gt;For most of my career as an IC, the code did the talking. If the system worked, the message was clear. If the tests passed, the argument was won. Communication meant writing commit messages, maybe the occasional design document, and turning up to stand-up with something coherent to say.&lt;/p&gt;&#xA;&lt;p&gt;Then I moved into leadership, and I discovered something uncomfortable: the thing I&amp;rsquo;d spent the least time deliberately practising was suddenly the thing I needed to be best at. Not architecture. Not debugging. Communication.&lt;/p&gt;</description>
    </item>
    <item>
      <title>JavaScript Security Code Review Guide</title>
      <link>https://www.secdev.uk/blog/articles/javascript_security_review_guide/</link>
      <pubDate>Wed, 09 Oct 2024 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/articles/javascript_security_review_guide/</guid>
      <description>&lt;h2 id=&#34;1-introduction&#34;&gt;1. Introduction&lt;/h2&gt;&#xA;&lt;p&gt;I put this guide together as a structured approach to security-focused code review for JavaScript and Node.js applications. Whether you&amp;rsquo;re just starting to identify security vulnerabilities in JavaScript code or you&amp;rsquo;re an experienced developer looking for a language-specific checklist, I&amp;rsquo;ve tried to make it useful at both levels.&lt;/p&gt;&#xA;&lt;p&gt;JavaScript&amp;rsquo;s dynamic typing, prototype-based inheritance, single-threaded event loop with async concurrency, and the vast npm ecosystem make it enormously productive, but the more I dug into JavaScript security, the more I realised these same qualities introduce security pitfalls that static analysis alone cannot always catch. What follows covers manual review strategies, common anti-patterns, recommended tooling, and vulnerability patterns organised by class, with cross-references to the intentionally vulnerable examples in this project.&lt;/p&gt;</description>
    </item>
    <item>
      <title>The Brilliant Jerk Problem</title>
      <link>https://www.secdev.uk/blog/leadership/1.6-the-brilliant-jerk-problem/</link>
      <pubDate>Mon, 07 Oct 2024 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/leadership/1.6-the-brilliant-jerk-problem/</guid>
      <description>&lt;p&gt;There&amp;rsquo;s a conversation that every engineering leader has at some point. It usually starts with something like: &amp;ldquo;Yeah, they&amp;rsquo;re difficult, but they&amp;rsquo;re the only person who understands the billing system.&amp;rdquo; Or: &amp;ldquo;I know people find them hard to work with, but their output is incredible.&amp;rdquo; You nod along, because you&amp;rsquo;ve probably said something similar yourself. I certainly have.&lt;/p&gt;&#xA;&lt;p&gt;The brilliant jerk problem is one of those leadership challenges that feels genuinely hard the first time you face it, and blindingly obvious in hindsight. Someone on your team produces exceptional technical work, maybe they&amp;rsquo;re the fastest coder, the one who understands the gnarliest parts of the system, the person who can debug anything. But they also make people feel small. They dismiss ideas with contempt. They create an atmosphere where others are afraid to speak up, ask questions, or challenge anything.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Managing Conflict Without Avoiding It</title>
      <link>https://www.secdev.uk/blog/leadership/1.5-managing-conflict-without-avoiding-it/</link>
      <pubDate>Mon, 16 Sep 2024 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/leadership/1.5-managing-conflict-without-avoiding-it/</guid>
      <description>&lt;p&gt;I used to think I was good at managing conflict. What I was actually good at was avoiding it.&lt;/p&gt;&#xA;&lt;p&gt;Early in my management career, I&amp;rsquo;d watch two engineers disagree about an architectural approach and my instinct was to smooth things over. Find the middle ground. Suggest we &amp;ldquo;take it offline.&amp;rdquo; Anything to get past the uncomfortable moment and back to something that felt like progress. I told myself I was being diplomatic. In reality, I was being a coward, and I was making things worse.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Trust as a Leadership Currency</title>
      <link>https://www.secdev.uk/blog/leadership/1.4-trust-as-a-leadership-currency/</link>
      <pubDate>Mon, 26 Aug 2024 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/leadership/1.4-trust-as-a-leadership-currency/</guid>
      <description>&lt;p&gt;Early in my management career, I inherited a team that had been through a rough patch. Their previous lead had been technically brilliant but unpredictable, generous with praise one week, dismissive the next. Decisions were made behind closed doors and communicated as faits accomplis. By the time I arrived, the team had learned a very rational behaviour: keep your head down, don&amp;rsquo;t volunteer ideas, and never be the one to deliver bad news.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Giving Feedback That Actually Lands</title>
      <link>https://www.secdev.uk/blog/leadership/1.3-giving-feedback-that-actually-lands/</link>
      <pubDate>Mon, 05 Aug 2024 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/leadership/1.3-giving-feedback-that-actually-lands/</guid>
      <description>&lt;p&gt;There&amp;rsquo;s a moment I remember clearly from early in my management career. I&amp;rsquo;d prepared what I thought was a really well-structured piece of feedback for someone on my team. I&amp;rsquo;d thought about the situation, the behaviour, the impact, the whole lot. I delivered it calmly, clearly, and with good intentions. And it landed like a brick through a window.&lt;/p&gt;&#xA;&lt;p&gt;The person shut down. They nodded politely, said &amp;ldquo;okay, thanks,&amp;rdquo; and I could see the shutters come down behind their eyes. Whatever I&amp;rsquo;d intended to communicate, what they&amp;rsquo;d received was something entirely different. That gap, between intending to give helpful feedback and having it actually received that way, is enormous. And I&amp;rsquo;ve learned the hard way, more than once, that closing it takes far more than just getting your words right.&lt;/p&gt;</description>
    </item>
    <item>
      <title>The Art of the One-on-One: Beyond Status Updates</title>
      <link>https://www.secdev.uk/blog/leadership/1.2-the-art-of-the-one-on-one-beyond-status-updates/</link>
      <pubDate>Mon, 15 Jul 2024 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/leadership/1.2-the-art-of-the-one-on-one-beyond-status-updates/</guid>
      <description>&lt;p&gt;In the first article in this series, I wrote about psychological safety, the foundation that makes everything else in leadership possible. If there&amp;rsquo;s one place where that foundation gets tested every single week, it&amp;rsquo;s the one-on-one.&lt;/p&gt;&#xA;&lt;p&gt;I&amp;rsquo;ll be honest: my early one-on-ones were terrible. I&amp;rsquo;d sit down with a report, open Jira, and essentially run a standup for two people. &amp;ldquo;How&amp;rsquo;s the migration going? Any blockers? Cool, see you next week.&amp;rdquo; I thought I was being efficient. What I was actually doing was wasting the most valuable recurring meeting on my calendar, and probably theirs too.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Why Psychological Safety Is the Foundation Everything Else Sits On</title>
      <link>https://www.secdev.uk/blog/leadership/1.1-why-psychological-safety-is-the-foundation-everything-else-sits-on/</link>
      <pubDate>Mon, 24 Jun 2024 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/leadership/1.1-why-psychological-safety-is-the-foundation-everything-else-sits-on/</guid>
      <description>&lt;p&gt;You can have the best engineers, the clearest roadmap, and the most elegant architecture in the world, and none of it will matter if your team doesn&amp;rsquo;t feel safe enough to speak up when something&amp;rsquo;s wrong.&lt;/p&gt;&#xA;&lt;p&gt;I&amp;rsquo;ve been leading engineering teams for over two decades now, across start-ups and large corporates, and if there&amp;rsquo;s one thing I keep coming back to, it&amp;rsquo;s this: the teams that performed best weren&amp;rsquo;t the ones with the most talent. They were the ones where people felt safe enough to be honest. Safe enough to say &amp;ldquo;I don&amp;rsquo;t understand this,&amp;rdquo; or &amp;ldquo;I think we&amp;rsquo;re making a mistake,&amp;rdquo; or &amp;ldquo;I need help.&amp;rdquo;&lt;/p&gt;</description>
    </item>
    <item>
      <title></title>
      <link>https://www.secdev.uk/blog/code/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/code/</guid>
      <description>&lt;h1 id=&#34;source-code&#34;&gt;Source Code&lt;/h1&gt;&#xA;&lt;p&gt;Like most engineers, I write more code than I make public.&#xA;Source code relating to articles on this site is publicly available on &lt;a href=&#34;https://github.com/guyadixon?tab=repositories&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;GitHub&lt;/a&gt;, along with some earlier projects.&lt;/p&gt;</description>
    </item>
    <item>
      <title></title>
      <link>https://www.secdev.uk/blog/me/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/me/</guid>
      <description>&lt;h1 id=&#34;about-me&#34;&gt;About Me&lt;/h1&gt;&#xA;&lt;p&gt;I&amp;rsquo;m a technology professional with over 25 years&amp;rsquo; experience as both a software engineer and a technology leader. Over the years I&amp;rsquo;ve led technology teams ranging in size from a handful of engineers to entire technology functions comprising teams of teams.&lt;/p&gt;&#xA;&lt;p&gt;Over the last decade I&amp;rsquo;ve had a big focus on security, specifically application/product security. Since 2021, I&amp;rsquo;ve been leading application security teams at AWS &amp;amp; Amazon.&lt;/p&gt;&#xA;&lt;p&gt;I hope you enjoy these articles.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
