
Newsletter Snippet: AI Writes Bugs

Section: Weekly Intelligence Brief
Position: Lead story or featured insight

---

🚨 The AI Security Paradox: When Your Coding Assistant Becomes Your Biggest Risk

The headline: Claude Opus 4.6 co-authored a $1.8M vulnerability in Moonwell (DeFi lending protocol). Exploited one block after deployment.

Why it matters: This is the first confirmed seven-figure loss from AI-generated Solidity code. A serial oracle exploiter has now stolen $3.5M+ from 5+ DeFi protocols using similar patterns.

The paradox: The same week, Anthropic published that Claude Opus 4.6 found 22 vulnerabilities in Firefox (14 high-severity, 1 zero-day) for just $4,000 in API credits. AI can find bugs 10-50x cheaper than humans... and create them 10x faster.

What's changing:

  • Bug bounty platforms are deploying anti-AI-spam measures (Sherlock: 250 USDC stake, Code4rena: signal metric caps)
  • "AI-generated code audit" is emerging as its own service category
  • Discovery is commoditizing; economic reasoning is the new moat

Our take: Every team using AI to write smart contracts needs AI-powered validation at deployment speed. Traditional audit cycles (weeks/months) can't protect against same-day deploys. The Moonwell exploit happened in 12 seconds.

What we're building: DeepThreat now has 15 specialized scanners, including oracle feed validation, AI code pattern detection, and supply chain analysis. Our EVMbench detection rate (82.6%) beats the GPT-5.3-Codex baseline (72.2%). Local scanning via VulnLLM-R means unlimited audits at zero marginal cost.
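
To make the oracle feed validation idea concrete, here is a minimal sketch of the kind of sanity check such a scanner encourages: reject prices that are stale or deviate too far from a reference feed before they reach lending logic. All names and thresholds below are illustrative assumptions, not DeepThreat's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class OracleReport:
    price: float      # reported asset price
    timestamp: int    # unix time the price was observed

def validate_oracle_report(
    report: OracleReport,
    reference_price: float,
    now: int,
    max_staleness: int = 60,      # seconds before a report is considered stale
    max_deviation: float = 0.02,  # 2% band around the reference price
) -> bool:
    """Return True only if the report is fresh, positive, and within band."""
    if now - report.timestamp > max_staleness:
        return False  # stale: may not reflect the current market
    if report.price <= 0:
        return False  # nonsensical price
    deviation = abs(report.price - reference_price) / reference_price
    return deviation <= max_deviation

# A manipulated price 10% above the reference feed is rejected:
print(validate_oracle_report(OracleReport(110.0, 1_000), 100.0, now=1_010))  # False
```

The Moonwell-style failure mode is exactly a missing check of this shape: code that consumes an oracle price without bounding how far it can drift from an independent reference in a single block.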

Read the full case study: [Why Claude Opus 4.6 Couldn't Catch Its Own Oracle Bug →]

---

*From the Gilchrist Research Intelligence Desk — tracking DeFi exploits, AI security, and the future of autonomous auditing.*