Insecure design is the vulnerability class that fascinates me the most, because no amount of perfect implementation can fix it. It lives in the architecture, the data flow, the decisions made before anyone wrote a line of code. OWASP A04 captures something that shows up again and again in real-world applications: systems that are insecure by design, not because of a coding mistake, but because the system was never designed to be secure in the first place. In this post, I want to focus on two of the most common manifestations: verbose error messages that leak internal details (CWE-209) and insufficiently protected credentials (CWE-522). I’ll walk through Python, Java, and JavaScript examples that range from the immediately obvious to the patterns that, from what I’ve seen in code reviews, can survive months without being caught.

Why Insecure Design Is Hard to Fix

Most vulnerability classes have a mechanical fix. SQL injection? Use parameterized queries. XSS? Encode output. Insecure design doesn’t have a patch; it requires rethinking how the system works. The two CWEs I’m covering here illustrate this perfectly:

  • CWE-209 (Generation of Error Message Containing Sensitive Information): The application returns stack traces, internal file paths, database connection strings, or other implementation details in error responses. The developer’s intent was helpful debugging. The result is an information goldmine for attackers.
  • CWE-522 (Insufficiently Protected Credentials): Credentials are hardcoded in source code, stored in plaintext, or transmitted through channels that expose them unnecessarily. The developer needed the credentials to work. The design never considered how to protect them.

Both problems share a root cause that keeps coming up in security research: the system was designed for functionality without a threat model. Nobody asked “what happens if an attacker sees this error message?” or “what happens if someone gets access to the source code?”

The Easy-to-Spot Version

Python: Hardcoded SMTP Credentials

SMTP_HOST = "mail.internal.acmecorp.io"
SMTP_USER = "noreply@acmecorp.io"
SMTP_PASS = "SmtpR3lay#2024!"

USERS = {
    1: {"id": 1, "username": "admin", "password": "Adm1n_Pr0d!",
        "email": "admin@acmecorp.io", "role": "admin"},
    2: {"id": 2, "username": "jdoe", "password": "J0hn_D03#2024",
        "email": "jdoe@acmecorp.io", "role": "editor"},
}

Plaintext credentials assigned to clearly named variables at the top of the file. I’ve run into this pattern in code reviews more than I’d like to admit. The SMTP password grants access to the corporate mail relay. The user passwords are stored without hashing, meaning a memory dump, debug endpoint, or source code leak exposes every account. Any reviewer scanning the first 20 lines of this file would flag it, and yet it ships.
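For completeness, the design-level fix for the password half of this: store a salted, slow hash instead of the password itself. A standard-library sketch using pbkdf2_hmac (the iteration count here is illustrative; tune it to your hardware and current guidance):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    # Per-user random salt plus a deliberately slow key-derivation function.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("Adm1n_Pr0d!")
assert verify_password("Adm1n_Pr0d!", salt, digest)
assert not verify_password("wrong-guess", salt, digest)
```

With this design, the USERS dictionary above would hold (salt, digest) pairs, and a source leak no longer hands out working passwords.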

Java: Static Final Credential Constants

private static final String DB_URL = "jdbc:h2:mem:designdb";
private static final String DB_USER = "sa";
private static final String DB_PASS = "admin123";
private static final String SMTP_HOST = "smtp.internal.acmecorp.io";
private static final String SMTP_CREDENTIAL = "SmtpRelay#Prod2024!";

Same pattern in Java. static final strings are compiled into the class file and trivially extractable from the JAR. The variable names (DB_PASS, SMTP_CREDENTIAL) make them easy for both humans and SAST tools to identify. Bandit, SpotBugs, and eslint-plugin-security all flag hardcoded credential patterns with high confidence. What I find interesting is that despite how easy these are to detect, they still show up in production codebases regularly.

JavaScript: Module-Level Credential Constants

const DB_HOST = "db.internal.acmecorp.io";
const DB_USER = "appuser";
const DB_PASS = "Pg_Pr0d#2024";
const SMTP_HOST = "mail.internal.acmecorp.io";
const SMTP_PASS = "SmtpR3lay#2024!";

JavaScript source files are often served directly by misconfigured servers or bundled into client-side code. Hardcoded credentials in a Node.js backend are one leaked .js file away from full infrastructure compromise. I came across a case in a code review where a single misconfigured static file handler exposed the entire backend source; that really drove the point home for me.

The Hard-to-Spot Version

Python: Stack Trace Leaking Internal Paths

@app.route("/api/reports/generate", methods=["POST"])
def generate_report():
    session = get_current_session()
    if not session:
        return jsonify({"error": "Authentication required"}), 401

    data = request.get_json()
    report_type = data.get("type", "summary")

    try:
        db = get_db()
        cursor = db.execute(f"SELECT * FROM {report_type}_reports")
        rows = [dict(r) for r in cursor.fetchall()]
        return jsonify({"report": rows})
    except Exception as e:
        import traceback
        return jsonify({
            "error": "Report generation failed",
            "details": str(e),
            "trace": traceback.format_exc(),
            "database": DATABASE_PATH
        }), 500

The happy path looks fine. The problem is in the error handler, and this is one of those things that’s easy to gloss over during reviews. When the query fails (which an attacker can trigger by requesting a nonexistent report type), the response includes the full Python traceback, the raw exception message, and the internal database file path. The traceback reveals file paths like /app/venv/lib/python3.10/site-packages/..., exposing library versions, internal directory structure, and code organization. (The f-string table-name interpolation is a separate injection problem in its own right, but the handler is the focus here.)

A reviewer focused on the SQL query might miss the error handler entirely. The traceback.format_exc() call looks like standard debugging practice, and I’ve talked to developers who genuinely consider error details in API responses a feature rather than a vulnerability.
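You can watch the leak happen without any web framework at all, because traceback.format_exc() bakes file paths into the string it returns. A self-contained sketch of what the vulnerable handler hands back:

```python
import traceback

def unsafe_error_body(exc: Exception) -> dict:
    # Mirrors the vulnerable handler: raw message plus full traceback.
    return {
        "error": "Report generation failed",
        "details": str(exc),
        "trace": traceback.format_exc(),
    }

try:
    raise ValueError("no such table: nonexistent_reports")
except ValueError as exc:
    body = unsafe_error_body(exc)

# Every 'File "..."' line in the trace is an internal path handed to the caller.
assert 'File "' in body["trace"]
assert "no such table" in body["details"]
```

Run it and the "trace" value contains the path of the script itself, which is exactly the information an attacker maps your deployment with.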

Java: Full Exception Serialization in Error Response

@PostMapping("/api/reports/generate")
public ResponseEntity<?> generateReport(
        @RequestBody Map<String, String> body,
        @RequestHeader("Authorization") String auth) {
    Session session = getSession(auth);
    if (session == null) {
        return ResponseEntity.status(401).body(Map.of("error", "Auth required"));
    }

    String reportType = body.getOrDefault("type", "summary");
    try {
        String sql = "SELECT * FROM " + reportType + "_reports";
        List<Map<String, Object>> rows = new ArrayList<>();
        try (Statement stmt = connection.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            // ... process results ...
        }
        return ResponseEntity.ok(Map.of("report", rows));
    } catch (Exception e) {
        StringWriter sw = new StringWriter();
        e.printStackTrace(new PrintWriter(sw));
        return ResponseEntity.status(500).body(Map.of(
            "error", "Report generation failed",
            "exception", e.getMessage(),
            "stackTrace", sw.toString(),
            "databaseUrl", DB_URL
        ));
    }
}

The Java version serializes the full stack trace using StringWriter/PrintWriter and includes the database URL in the error response. The stack trace exposes Spring Boot internals, H2 driver version, and the exact class hierarchy. An attacker who triggers this error gets a detailed map of the application’s technology stack.

I find this one harder to spot than the Python version because Java’s verbose exception handling makes the StringWriter pattern look like standard practice. In my reading of Java codebases, this exact pattern comes up frequently in production, and it rarely raises eyebrows.

JavaScript: Database Credentials in Error Response

app.post('/api/reports/generate', (req, res) => {
    const session = getSession(req);
    if (!session) {
        return res.status(401).json({ error: "Authentication required" });
    }

    const reportType = req.body.type || "summary";
    try {
        const query = `SELECT * FROM ${reportType}_reports`;
        db.all(query, [], (err, rows) => {
            if (err) {
                return res.status(500).json({
                    error: "Report generation failed",
                    details: err.message,
                    stack: err.stack,
                    database: {
                        host: DB_HOST,
                        user: DB_USER,
                        password: DB_PASS
                    }
                });
            }
            return res.json({ report: rows });
        });
    } catch (err) {
        return res.status(500).json({
            error: "Report generation failed",
            details: err.message,
            stack: err.stack,
            database: {
                host: DB_HOST,
                user: DB_USER,
                password: DB_PASS
            }
        });
    }
});

This is the worst of both worlds, and honestly, it’s the kind of thing that makes you wince. The error response includes the Node.js stack trace and the complete database credentials: host, username, and password. The developer likely added the database object to help diagnose connection issues during development and never removed it. Both the callback error path and the outer catch block leak the same information.

The Nuanced Case: Password Reset Token in API Response

@app.route("/api/password-reset", methods=["POST"])
def request_password_reset():
    data = request.get_json()
    email = data.get("email", "")

    user = next((u for u in USERS.values() if u["email"] == email), None)
    if not user:
        return jsonify({"error": "User not found"}), 404

    reset_token = secrets.token_urlsafe(32)
    RESET_TOKENS[reset_token] = {
        "user_id": user["id"],
        "expires": time.time() + 3600
    }

    return jsonify({
        "message": "Password reset initiated",
        "token": reset_token,
        "smtp_server": SMTP_HOST
    })

This one is genuinely subtle, and it’s one of my favourite examples to use when explaining insecure design. The developer implemented a password reset flow that generates a cryptographically secure token, good. But instead of sending the token via email (which is the authentication factor in a reset flow), the token is returned directly in the API response. Any caller who knows a user’s email can reset their password without ever accessing the email inbox.

The leaked SMTP hostname is a bonus for reconnaissance, but the real vulnerability is the design flaw: the email channel is supposed to prove the requester owns the account. Returning the token in the response bypasses that proof entirely. This pattern shows up in production more than you’d think, usually because the email-sending part was “coming later” and never arrived.
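To see how little the attacker needs, here is the flaw reduced to plain Python, stripped of the web framework (the in-memory stores are stand-ins for the app’s user and token tables):

```python
import secrets
import time

# Stand-ins for the application's user store and reset-token table.
USERS = {1: {"id": 1, "email": "victim@acmecorp.io"}}
RESET_TOKENS = {}

def request_password_reset(email: str) -> dict:
    # Same flawed design as above: the token goes back to the caller.
    user = next((u for u in USERS.values() if u["email"] == email), None)
    if not user:
        return {"error": "User not found"}
    token = secrets.token_urlsafe(32)
    RESET_TOKENS[token] = {"user_id": user["id"], "expires": time.time() + 3600}
    return {"message": "Password reset initiated", "token": token}

# The attacker knows only the victim's email address...
resp = request_password_reset("victim@acmecorp.io")
# ...and now holds a valid reset token without ever seeing the inbox.
assert resp["token"] in RESET_TOKENS
```

The token generation is flawless; the delivery channel is the vulnerability.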

The Java equivalent looks similar:

@PostMapping("/api/password-reset")
public ResponseEntity<?> requestPasswordReset(@RequestBody Map<String, String> body) {
    String email = body.getOrDefault("email", "");
    Map<String, Object> user = findUserByEmail(email);
    if (user == null) {
        return ResponseEntity.status(404).body(Map.of("error", "User not found"));
    }

    String token = UUID.randomUUID().toString();
    resetTokens.put(token, Map.of(
        "userId", user.get("id"),
        "expires", System.currentTimeMillis() + 3600000
    ));

    return ResponseEntity.ok(Map.of(
        "message", "Password reset initiated",
        "token", token,
        "smtpServer", SMTP_HOST
    ));
}

Debug Endpoints That Expose Everything

@app.route("/api/debug/user/<int:user_id>", methods=["GET"])
def debug_user(user_id):
    session = get_current_session()
    if not session:
        return jsonify({"error": "Authentication required"}), 401

    user = USERS.get(user_id)
    if not user:
        return jsonify({"error": "User not found"}), 404

    return jsonify(user)

The endpoint checks authentication but not authorization. Any logged-in user can query any other user’s complete record, including the plaintext password. The endpoint name “debug” suggests it was meant for development, but it’s registered in the production router with no feature flag or environment check. “Temporary” debug endpoints have a way of living happily in production; I’ve run into this in code reviews, and it’s always the same story.
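The missing check is a one-liner. A sketch of the authorization rule this endpoint needs (role names follow the examples above; the function name is mine):

```python
def can_view_user(caller: dict, target_id: int) -> bool:
    # Authorization, not just authentication: self-access or admin role.
    return caller["id"] == target_id or caller["role"] == "admin"

admin = {"id": 1, "role": "admin"}
editor = {"id": 2, "role": "editor"}

assert can_view_user(admin, 2)       # admins may view anyone
assert can_view_user(editor, 2)      # users may view themselves
assert not can_view_user(editor, 1)  # but not other accounts
```

Calling this before building the response turns a full-record disclosure into a 403.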

In JavaScript, the same pattern:

app.get('/api/debug/user/:id', (req, res) => {
    const session = getSession(req);
    if (!session) {
        return res.status(401).json({ error: "Authentication required" });
    }

    const userId = parseInt(req.params.id);
    const user = users[userId];
    if (!user) {
        return res.status(404).json({ error: "User not found" });
    }

    return res.json(user);
});

res.json(user) serializes every property on the user object, including password, failedAttempts, and any other internal fields. SAST tools can’t determine which fields are sensitive; they’d need to understand the data model to know that user.password shouldn’t appear in an HTTP response.
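The safer design is an allowlist serializer: enumerate the fields that may leave the server instead of trusting whatever happens to be on the object. A minimal Python sketch (field names follow the examples above):

```python
PUBLIC_USER_FIELDS = ("id", "username", "email", "role")

def public_view(user: dict) -> dict:
    # Allowlist: fields added to the model later stay private by default.
    return {k: user[k] for k in PUBLIC_USER_FIELDS if k in user}

user = {
    "id": 2, "username": "jdoe", "email": "jdoe@acmecorp.io",
    "role": "editor", "password": "J0hn_D03#2024", "failed_attempts": 3,
}
safe = public_view(user)
assert "password" not in safe
assert safe["username"] == "jdoe"
```

An allowlist fails closed: when someone adds a new sensitive field next quarter, it stays out of responses without anyone remembering to exclude it.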

Config Endpoints That Transmit Credentials

@app.route("/api/config", methods=["GET"])
def get_config():
    session = get_current_session()
    if not session:
        return jsonify({"error": "Authentication required"}), 401

    caller = USERS.get(session["user_id"])
    if not caller or caller["role"] != "admin":
        return jsonify({"error": "Admin access required"}), 403

    return jsonify({
        "database_path": DATABASE_PATH,
        "smtp_host": SMTP_HOST,
        "smtp_user": SMTP_USER,
        "smtp_pass": SMTP_PASS,
        "session_count": len(SESSIONS)
    })

This endpoint is admin-restricted, which gives a false sense of security. What’s worth paying attention to here is that credentials are being transmitted over the network in an HTTP response. They’ll appear in proxy logs, browser history, network monitoring tools, and any intermediate cache. If an attacker compromises an admin session (via XSS, session fixation, or token theft), they get persistent access to all infrastructure credentials, access that outlives the session. The “but it’s admin-only” argument comes up a lot, but it doesn’t hold up under scrutiny.
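If operators genuinely need a config view, a safer design reports whether a secret is configured without ever transmitting its value. A sketch (the key names are illustrative):

```python
SECRET_CONFIG_KEYS = {"smtp_pass", "db_pass", "api_key"}

def safe_config_view(config: dict) -> dict:
    # Confirm a secret is present without ever transmitting its value.
    return {
        k: ("<set>" if k in SECRET_CONFIG_KEYS and v else v)
        for k, v in config.items()
    }

config = {"smtp_host": "mail.internal.acmecorp.io", "smtp_pass": "SmtpR3lay#2024!"}
view = safe_config_view(config)
assert view["smtp_pass"] == "<set>"
assert "SmtpR3lay" not in str(view)
```

The admin still learns what they need ("is SMTP configured?") while proxy logs, caches, and a hijacked session get nothing reusable.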

Detection Strategies

SAST Tool Coverage

Hardcoded credentials (CWE-522):

  • Bandit (Python): Flags strings assigned to variables matching credential patterns (password, secret, key)
  • SpotBugs (Java): Detects hardcoded passwords in static final fields
  • eslint-plugin-security (JavaScript): Identifies credential-like string assignments
  • Semgrep: Custom rules can match credential patterns across all three languages

Information disclosure (CWE-209):

  • Bandit: Can detect traceback.format_exc() in web response contexts
  • SpotBugs: Flags printStackTrace() and StringWriter patterns in HTTP handlers
  • NodeJsScan: Detects err.stack flowing into Express responses

What SAST misses:

  • Password reset tokens returned in responses (business logic flaw)
  • Debug endpoints that serialize objects with sensitive fields (requires data model understanding)
  • Admin-restricted config endpoints that expose credentials (the access control masks the vulnerability)

Manual Review Strategy

  1. Search for credential patterns: Grep for password, secret, credential, key, token in variable assignments. Verify they load from environment variables or a secrets manager, not string literals.
  2. Audit error handlers: Every catch block in a web handler is a potential information leak. Verify that error responses contain only generic messages, not stack traces, file paths, or connection strings.
  3. Map debug/diagnostic endpoints: Search for routes containing debug, config, env, health, status. Verify they don’t expose sensitive data and are properly access-controlled or disabled in production.
  4. Trace password reset flows: Verify that reset tokens are delivered exclusively via email/SMS, never returned in API responses.
  5. Check object serialization: When an endpoint returns jsonify(user) or res.json(user), verify the object doesn’t contain password or credential fields.
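Step 1 above can be partly automated with a few lines of Python. This is a rough heuristic in the spirit of the SAST tools mentioned earlier, not a substitute for them (the regex and sample are my own):

```python
import re

# Heuristic: a credential-ish identifier assigned a string literal.
CRED_LITERAL = re.compile(
    r"(?im)^\s*\w*(pass|secret|credential|token|api_?key)\w*\s*=\s*[\"'][^\"']+[\"']"
)

def find_credential_literals(source: str) -> list[str]:
    return [m.group(0).strip() for m in CRED_LITERAL.finditer(source)]

sample = "\n".join([
    'SMTP_PASS = "SmtpR3lay#2024!"',             # flagged: literal secret
    'SMTP_HOST = "mail.internal.acmecorp.io"',   # not flagged: not a secret name
    'DB_PASS = os.environ["DB_PASS"]',           # not flagged: loaded from env
])
hits = find_credential_literals(sample)
assert hits == ['SMTP_PASS = "SmtpR3lay#2024!"']
```

It will miss obfuscated secrets and flag some false positives, but it catches the easy-to-spot class from the first half of this post in seconds.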

Remediation

Load Credentials from Environment Variables

# Python
import os

SMTP_HOST = os.environ["SMTP_HOST"]
SMTP_USER = os.environ["SMTP_USER"]
SMTP_PASS = os.environ["SMTP_PASS"]

// Java
private static final String DB_URL = System.getenv("DB_URL");
private static final String DB_USER = System.getenv("DB_USER");
private static final String DB_PASS = System.getenv("DB_PASS");

// JavaScript
const DB_HOST = process.env.DB_HOST;
const DB_USER = process.env.DB_USER;
const DB_PASS = process.env.DB_PASS;
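A refinement worth pairing with this: validate the required variables at startup so a missing secret fails loudly before the app serves traffic, rather than surfacing as a confusing (and potentially leaky) runtime error. A sketch, with hypothetical variable names:

```python
import os

REQUIRED_VARS = ("SMTP_HOST", "SMTP_USER", "SMTP_PASS")

def load_settings(env=os.environ) -> dict:
    # Fail loudly at startup, not with a leaky 500 mid-request.
    missing = [k for k in REQUIRED_VARS if not env.get(k)]
    if missing:
        raise RuntimeError(f"Missing required environment variables: {missing}")
    return {k: env[k] for k in REQUIRED_VARS}

fake_env = {"SMTP_HOST": "mail.example.com", "SMTP_USER": "noreply",
            "SMTP_PASS": "s3cret"}
settings = load_settings(fake_env)
assert settings["SMTP_HOST"] == "mail.example.com"

try:
    load_settings({})
    raise AssertionError("expected RuntimeError")
except RuntimeError as e:
    assert "SMTP_HOST" in str(e)
```

Note that the error message names the variables, never their values, so even this failure path leaks nothing.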

Return Generic Error Messages

# Python
import logging
logger = logging.getLogger(__name__)

@app.route("/api/reports/generate", methods=["POST"])
def generate_report():
    try:
        # ... business logic ...
        pass
    except Exception as e:
        logger.exception("Report generation failed")
        return jsonify({"error": "Report generation failed"}), 500

// Java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

private static final Logger logger = LoggerFactory.getLogger(App.class);

} catch (Exception e) {
    logger.error("Report generation failed", e);
    return ResponseEntity.status(500).body(Map.of("error", "Report generation failed"));
}

// JavaScript
} catch (err) {
    console.error("Report generation failed:", err);
    return res.status(500).json({ error: "Report generation failed" });
}

Never Return Reset Tokens in Responses

@app.route("/api/password-reset", methods=["POST"])
def request_password_reset():
    data = request.get_json()
    email = data.get("email", "")
    user = next((u for u in USERS.values() if u["email"] == email), None)

    if user:
        reset_token = secrets.token_urlsafe(32)
        RESET_TOKENS[reset_token] = {"user_id": user["id"], "expires": time.time() + 3600}
        send_email(user["email"], f"Your reset link: https://app.example.com/reset?token={reset_token}")

    # Always return the same response to prevent email enumeration
    return jsonify({"message": "If the email exists, a reset link has been sent"})

Filter Sensitive Fields from Debug Responses

// JavaScript, if debug endpoints must exist
app.get('/api/debug/user/:id', adminOnly, (req, res) => {
    const user = users[parseInt(req.params.id)];
    if (!user) return res.status(404).json({ error: "Not found" });

    const { password, failedAttempts, ...safeUser } = user;
    return res.json(safeUser);
});

Here’s the fundamental principle that clicked for me while researching this: design your system assuming that every API response will be seen by an attacker. Error messages should help the user, not the attacker. Credentials should never appear in source code, API responses, or logs. And any endpoint that returns internal state should be evaluated through the lens of “what’s the worst thing someone could do with this information?” If you start every design conversation with that question, you’ll catch most of these issues before they ever become code.