<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>CWE-522 on guy@secdev.uk</title>
    <link>https://www.secdev.uk/blog/tags/cwe-522/</link>
    <description>Recent content in CWE-522 on guy@secdev.uk</description>
    <generator>Hugo</generator>
    <language>en-gb</language>
    <copyright>Guy Dixon | guy@secdev.uk</copyright>
    <lastBuildDate>Sat, 15 Mar 2025 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://www.secdev.uk/blog/tags/cwe-522/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Insecure Design</title>
      <link>https://www.secdev.uk/blog/technology/2025-03-15-insecure-design/</link>
      <pubDate>Sat, 15 Mar 2025 00:00:00 +0000</pubDate>
      <guid>https://www.secdev.uk/blog/technology/2025-03-15-insecure-design/</guid>
      <description>&lt;p&gt;Insecure design is the vulnerability class that fascinates me the most, because no amount of perfect implementation can fix it. It lives in the architecture, the data flow, and the decisions made before anyone wrote a line of code. OWASP A04 captures a failure mode that shows up again and again in real-world applications: systems that are insecure by design, not because of a coding mistake, but because the system was never designed to be secure in the first place. In this post, I want to focus on two of the most common manifestations: verbose error messages that leak internal details (CWE-209) and insufficiently protected credentials (CWE-522). I&amp;rsquo;ll walk through Python, Java, and JavaScript examples that range from the immediately obvious to the subtle patterns that, from what I&amp;rsquo;ve seen in code reviews, can survive months without being caught.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
