<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Static Analysis on guy@secdev.uk</title>
    <link>https://www.secdev.uk/blog/tags/static-analysis/</link>
    <description>Recent content in Static Analysis on guy@secdev.uk</description>
    <generator>Hugo</generator>
    <language>en-gb</language>
    <copyright>Guy Dixon | guy@secdev.uk</copyright>
    <lastBuildDate>Sat, 20 Dec 2025 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://www.secdev.uk/blog/tags/static-analysis/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>SAST Tools Compared: What They Catch and What They Miss</title>
      <link>https://www.secdev.uk/blog/technology/2025-12-20-sast-tools-compared/</link>
      <pubDate>Sat, 20 Dec 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-12-20-sast-tools-compared/</guid>
      <description>&lt;p&gt;Static Application Security Testing (SAST) tools are the first line of automated defence against vulnerabilities in source code. They analyse code without executing it, looking for patterns that match known vulnerability classes. But here&amp;rsquo;s the thing: no single tool catches everything, and the differences between tools in detection capability, false-positive rates, and language support are significant. I wanted to understand exactly where the gaps are, so I spent time running these tools against intentionally vulnerable code and comparing their output. This post is my honest assessment of what they actually catch, what they miss, and where manual review has to pick up the slack.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
